What is shadow AI?
Shadow AI is the unauthorized use of AI tools within organizations without IT approval or security governance. Employees are increasingly adopting these tools independently to boost productivity, often bypassing official governance processes when they do so. In fact, Deloitte's 2026 State of AI in the Enterprise report found that worker access to AI rose by 50% in 2025 alone, yet only one in five companies has a mature governance model to oversee how that AI is actually being used.
The core issue here is that open-source AI platforms and user-friendly interfaces have made powerful AI capabilities accessible to anyone, creating a gap between what employees can access and what organizations can control.
While its capabilities offer significant productivity and personalization benefits, its use also poses security risks. For instance, OpenAI often uses interactions for model training unless users opt out, which creates the potential for exposure of private or sensitive training data. This issue has prompted many organizations to draft AI-specific security policies to mitigate such risks.
Banning AI outright, however, can backfire, leading to greater use of unauthorized tools and missed opportunities. So to safely unlock AI’s business potential instead, organizations need to encourage responsible adoption within secure frameworks. This can curb the spread of shadow AI while leveraging its transformative benefits.
AI Security Sample Assessment
See how organizations are tackling these shadow AI challenges with our sample security assessment report.

Shadow AI vs. shadow IT
Shadow AI is a newer variant of a problem that security and IT teams already know well: shadow IT. While both involve unsanctioned tools operating outside official controls, there are important differences in how they emerge, who uses them, and the types of risk they introduce:
Shadow IT refers to the general use of unauthorized technology, like apps or devices, outside of an organization’s IT framework. It often stems from employees finding workarounds to meet their needs but can create security vulnerabilities.
Shadow AI is similar to shadow IT, but it specifically focuses on unauthorized AI programs and services and their unpredictable, constantly evolving models, which makes them harder to secure. Governance frameworks for AI are still under development, which only adds to the difficulty.
Unlike shadow IT, which only developers or tech-savvy users tend to use, shadow AI sees adoption by employees across all roles. This creates a much wider, less predictable attack surface. Because of this, addressing shadow AI requires a focused approach that extends beyond traditional shadow IT solutions. For instance, organizations need to educate their users, encourage team collaboration, and establish governance that’s tailored to AI’s unique risks.
What causes shadow AI?
These three organizational gaps create the perfect environment for shadow AI to flourish:
Widespread availability: Modern AI tools require no technical expertise or approval processes, which means employees can access powerful language models and automation tools instantly.
Insufficient governance: Most organizations lack comprehensive AI policies, a gap that often starts at the top. A Deloitte survey even found that 66% of boards have little to no experience with or knowledge of AI. Without clear guidelines for tool selection and usage, employees must make independent decisions about AI adoption, which then creates shadow AI concerns.
Unmet business needs: Employees often adopt AI tools to close gaps in productivity, automate repetitive tasks, or speed up work when approved solutions don’t meet their requirements.
When these conditions align, AI tools will proliferate across organizations that lack oversight, creating significant security and compliance risks.
Shadow AI risks
Shadow AI creates several critical business risks that can compromise data security, operational integrity, and regulatory compliance. In fact, Gartner expects that over 40% of organizations will experience incidents related to compliance and security due to shadow AI by 2030.
Below are three imminent threats that modern organizations should be wary of:
1. Data Exposure and the "Vibe Coding" Trap
Data exposure is the most immediate threat stemming from shadow AI. When employees use unauthorized AI tools to vibe code, or build applications rapidly through natural language prompts, security often takes a back seat to speed. Without proper data handling agreements, these tools can leak proprietary code, customer data, and strategic secrets into public training sets or expose them through misconfigured backends.
Real-world example: In February 2026, Wiz researchers discovered a massive database breach in Moltbook, a viral social network for AI agents. Because the platform was "vibe-coded" without essential security protocols like Row Level Security (RLS), 1.5 million API keys and 35,000 user emails were exposed. This allowed anyone to hijack AI agents and access sensitive third-party services like OpenAI and AWS.
2. Misinformation and Agentic Manipulation
GenAI models often hallucinate when uncertain, but a newer, more dangerous risk lies in Agentic AI. These are AI copilots designed to take actions autonomously—like booking travel or accessing databases. If these agents are fed misinformation or manipulated via prompt injection, they may execute harmful actions without a human in the loop.
Real-world example: Throughout 2025, security researchers tracked agentic browsers, including Perplexity and Opera, and found them vulnerable to Indirect Prompt Injection. In these cases, a malicious actor could place hidden instructions on a website that, when read by an AI agent, would trick the agent into leaking the user's sensitive data or payment information via background API calls.
Securing AI Agents 101
AI agents are changing how work gets done. This one-pager explainer breaks it all down.

3. AI-Powered Malware and Supply Chain Risks
As AI matures, so does the malware designed to exploit it. Modern threats no longer just steal files; they weaponize a developer's own AI tools to automate the theft of credentials and spread across the entire organization’s software supply chain.
Real-world example: The s1ngularity and Shai-Hulud attacks of late 2025 marked a turning point in AI security. This AI-powered malware hijacked developers' local command-line tools (like Claude and Gemini) to identify and exfiltrate GitHub and npm tokens. Once stolen, the malware used these credentials to automatically infect and republish thousands of malicious code packages, creating a self-propagating "worm" that bypassed traditional static security scans.
The benefits of managing shadow AI technologies
While addressing shadow AI directly reduces risk, it also unlocks safer, more scalable AI adoption across your business. Here’s what organizations stand to gain by managing shadow AI:
Clear visibility and control
By discovering where and how employees already use AI, security and GRC teams gain an accurate inventory of tools, data flows, and use cases. That visibility is the foundation for effective policies, guardrails, and monitoring.
Reduced security and compliance exposure
Bringing AI usage into sanctioned, monitored channels limits where sensitive data can go and how organizations can process it. This also lowers the likelihood of data leakage, regulatory violations, and costly incident response.
Faster, safer AI enablement
When teams know which AI tools and patterns are approved, they can move quickly without improvising around security controls. This centralized governance shortens the path from a “good idea” to a production-ready AI use case.
Stronger governance and audit readiness
Documented AI inventories, risk assessments, and controls make it easier to demonstrate compliance to regulators, customers, and internal auditors, which turns AI governance from a fire drill into a repeatable process.
Higher employee trust and adoption
Clear guidance, along with secure tooling, signals that leadership wants employees to use AI—just not at the expense of security. That balance encourages responsible experimentation instead of risky workarounds.
Looking for AI security vendors? Check out our review of the most popular AI Security Solutions ->
10 best practices to mitigate shadow AI
Effective shadow AI management requires a structured approach that balances security with innovation. This framework helps organizations reduce unauthorized AI usage while enabling productive AI adoption across teams.
Here are 10 best practices that you can use to mitigate shadow AI in your own organization:
1. Define your organization’s risk appetite
A risk appetite assessment establishes the foundation for AI governance decisions. When completing one, organizations must evaluate their compliance requirements, operational vulnerabilities, and potential reputational impacts to determine appropriate security levels.
Using this type of assessment drives practical decisions. For instance, you might find that low-risk applications can operate with basic oversight, while high-risk use cases require comprehensive controls and monitoring.
To get started, create clearly defined risk categories to guide your tool selection and implementation strategies.
2. Adopt an incremental AI governance approach
Incremental implementation prevents governance overload and reduces employee resistance, which builds organizational confidence while maintaining security standards. Using this kind of phased approach also means that risk exposure will remain manageable, teams will be able to provide valuable feedback, and policies can evolve based on real-world usage patterns.
This type of approach typically begins with instituting pilot programs in controlled environments , then expanding AI adoption across the organization.
3. Establish a responsible AI policy
Employees need clear guidance on acceptable AI use, which makes a well-defined responsible AI policy essential. This policy should outline the types of data that they can process, prohibited activities, and security protocols that everyone must follow. It should also address data management practices to ensure that employees handle sensitive information securely and consistently, with a strong emphasis on maintaining data privacy. Additionally, it should require all new AI projects to undergo review and approval by your organization’s IT department before implementation.
But remember that regularly updating this policy is equally as important as creating the policy in the first place. That’s because AI technology evolves rapidly, and so do the risks it presents. If you treat the policy as a dynamic resource that adapts to new challenges and opportunities, though, you’ll be able to keep it aligned with your organization’s needs and security priorities.
4. Engage employees with AI adoption strategies
Hosting surveys or workshops can uncover the tools that your employees are using to fill in the gaps in your approved technology, as well as why they’re doing so. This insight will help you pinpoint governance weaknesses and identify opportunities to meet their needs with sanctioned solutions.
Involving employees in this way also helps you make sure that AI initiatives align with their workflows, which increases the usefulness and practicality of your governance strategies and reduces reliance on unauthorized tools.
5. Collaborate across departments to standardize AI usage
AI adoption touches multiple areas of an organization, so ensuring that all relevant teams—like IT, security, compliance, and operations—are aligned is critical. They must work together to create consistent standards for selecting, integrating, and monitoring AI tools to simplify oversight and reduce risks.
When every department follows the same rules, gaps in security are easier to spot, and the overall adoption process will become more streamlined and efficient as a result.
6. Provide training and support
Educating employees about AI risks and best practices is one of the most effective ways to reduce shadow AI. To do this, focus on practical guidance that fits their roles, such as how to safeguard sensitive data and avoid high-risk shadow AI applications.
Alongside training, you should also offer ongoing support via help desks, detailed guides, or digital adoption tools. These resources empower employees to use AI tools responsibly while giving them the confidence they need to securely navigate the challenges they encounter.
7. Prioritize AI solutions by risk and business impact
Not all AI tools are equal, which means you should focus first on low-risk, high-value applications. Automating simple tasks like these that don’t handle sensitive data can yield quick wins with minimal exposure. These tools also serve as a foundation for demonstrating AI’s benefits to your teams.
This type of strategic AI deployment typically follows a risk-based prioritization framework that balances business value with security requirements:
Phase 1: Deploy high-value, low-risk solutions first. These include tools with strong data privacy guarantees and no model training on user inputs.
Phase 2: Plan for high-value, high-risk applications while building internal capabilities. You should also consider on-premises solutions for sensitive workflows to maintain complete data control.
Phase 3: Implement comprehensive support systems, including training resources and usage guidelines, to ensure secure adoption across the organization.
After establishing a strong governance framework, you can then introduce more advanced tools. For example, with high-risk applications, you’ll want to apply stricter controls to effectively manage their business value against their potential risks.
8. Regularly audit shadow AI tool usage
Unauthorized AI usage can remain hidden unless you actively monitor it. To counter this, be sure to conduct routine audits to identify shadow AI tools, assess their data security risks, and decide whether to remove or formally adopt them into your approved technology stack.
These audits also reveal how employees use AI, which will give you valuable insights for refining your governance strategy. If your employees repeatedly use certain tools without approval, for instance, it may signal a gap in your sanctioned offerings that you need to address.
9. Establish clear accountability for AI governance
Assigning accountability ensures that your organization will implement and monitor AI policies effectively. The best way to do this is to designate a team or leader to be responsible for overseeing AI usage, maintaining compliance, and managing risks. You’ll want to make their role and authority clear across the organization, too.
Having a dedicated point of contact for AI governance simplifies communication and decision-making and thus helps you address risks promptly while creating consistency in enforcing policies.
10. Continuously update AI governance processes
Because AI technology changes rapidly, governance must evolve alongside it.
To get started here, schedule regular reviews of your policies to find out where you can incorporate new best practices, address emerging risks, and align with evolving business goals. During these updates, you should also involve cross-departmental teams and solicit employee feedback to keep your governance processes relevant and practical. This will also create a culture of adaptability that will keep your organization ahead of potential challenges.
AI Insights from 100 Cloud Architects, Engineers, and Security Leaders
Where organizations are in their cloud journey, how they’re using AI, what their top concerns are, and the strategies they’re using (or not using) to protect these dynamic environments.

Secure your organization against shadow AI risks
Managing shadow AI is about more than just blocking tools. It also means building a culture of responsible, secure AI use by making it easy for employees to report new AI tools and ask questions, reviewing your policies regularly and updating them as technology changes, and working with IT, security, and business teams to set clear guidelines for what’s allowed and what’s not.
You may also want to consider using a platform like Wiz to help you discover shadow AI activity, assess risk, and put guardrails in place. According to our State of AI in the Cloud 2025 report, over 85% of organizations are using managed or self-hosted AI services, which makes visibility and governance essential. Wiz helps here by giving you visibility into which AI tools your employees are using, who’s using them, and what data they touch. With this insight, you can then make smarter decisions, reduce risk, and help your teams use AI safely and effectively.
Ready to take control of shadow AI in your cloud environment? Request a demo today to learn how Wiz can help you discover, assess, and secure AI usage across your organization.