What is an AI Bill of Materials (AI-BOM)?
An AI Bill of Materials (AI-BOM) is a complete inventory of all the assets in your organization's AI ecosystem. Unlike generic asset inventories, AI-BOMs capture the relationships between models, datasets, and dependencies, making AI systems traceable and auditable. By cataloging these details, an AI-BOM provides the necessary visibility to effectively secure your AI systems.
What's the difference between AI-BOM and SBOM?
AI-BOMs function similarly to SBOMs (or Software Bills of Materials) but are purpose-built for the complexities of modern AI systems.
Where an SBOM focuses on static software components, an AI-BOM must also account for non-deterministic models, constantly evolving algorithms, and the data those models depend on. This broader, more detailed scope is necessary to capture these complexities and provide the foundation for effective AI Security Operations (AI SecOps).
An AI-BOM builds on the SBOM concept but extends it beyond code to include models, datasets, and dynamic dependencies—everything that influences AI system behavior.
When should you use an AI-BOM?
An AI-BOM is most valuable when you need complete visibility and control over your AI systems. You should build one if you're developing new AI projects, integrating third-party AI models and training datasets, or preparing for audits or AI compliance. By introducing an AI-BOM early, you track assets from the start and avoid blind spots later in the AI supply chain.
You should also use an AI-BOM when managing AI at scale. As your team deploys models, dependencies increase and risks rapidly multiply. An interactive AI-BOM helps them maintain order, enforce security measures, and ensure compliance, whether they’re monitoring a single high-risk model or hundreds of production workloads.
State of AI Security Report 2025
Building an AI-BOM is critical for managing AI risks, but understanding the broader AI security landscape is equally important. Wiz’s State of AI Security Report 2025 reveals how organizations are managing AI assets in the cloud, including the rise of self-hosted AI models and the security risks they pose.

What are the core components of an AI-BOM?
An effective AI-BOM should provide a layered map of your AI ecosystem and capture components in a structured way. This practice ensures clarity, traceability, and adaptability as your AI systems evolve.
Here are the key AI components your AI-BOM should capture:
Data layer
According to our AI Security Readiness report, 25% of organizations aren’t sure which AI services or datasets are active in their environment. This lack of visibility makes it harder to spot issues early and increases the chance that security risks, compliance failures, or data exposure will go undetected.
🛠️ Action steps:
Capture all dataset names, versions, and formats.
Record provenance (data origin) and privacy requirements.
Include sensitivity levels for classifying data and associated risks.
Link datasets to compliance frameworks and governance policies.
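As a concrete illustration, here is a minimal sketch of what a dataset entry might look like, expressed as a hypothetical Python record; the field names are illustrative, not part of any formal AI-BOM standard:

```python
from dataclasses import dataclass, field

@dataclass
class DatasetEntry:
    """Hypothetical AI-BOM record for a single dataset."""
    name: str
    version: str
    format: str                # e.g., "parquet" or "jsonl"
    provenance: str            # where the data originated
    sensitivity: str           # e.g., "public", "internal", "pii"
    owner: str                 # accountable team or individual
    compliance_tags: list = field(default_factory=list)  # e.g., ["GDPR"]

tickets = DatasetEntry(
    name="support-tickets",
    version="2024-06-01",
    format="jsonl",
    provenance="internal CRM export",
    sensitivity="pii",
    owner="data-platform-team",
    compliance_tags=["GDPR"],
)
```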
Model layer
Our 2025 State of AI in the Cloud report reveals that 75% of organizations use self-hosted AI models, and 77% rely on dedicated AI or ML software. Even so, visibility often remains limited or siloed across tools. Without a clear model lineage, your team risks running outdated or unverified AI models, which can introduce vulnerabilities and compliance gaps.
🛠️ Action steps:
Record different AI models’ names, types, and algorithm specifications.
Document hyperparameters (the configuration values set before training), training methods, and model versioning.
Link each model to its training datasets, dependencies, and version history for reproducibility.
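Building on the dataset sketch above, a model entry might reference its training data by name and version; again, the fields are hypothetical and only meant to show the lineage linkage:

```python
from dataclasses import dataclass, field

@dataclass
class ModelEntry:
    """Hypothetical AI-BOM record for a model, linked to its training data."""
    name: str
    version: str
    model_type: str            # e.g., "gradient-boosted trees", "transformer"
    hyperparameters: dict      # configuration values set before training
    training_method: str       # e.g., "supervised", "RLHF"
    training_datasets: list = field(default_factory=list)  # "name@version" references

fraud_model = ModelEntry(
    name="fraud-detector",
    version="3.2.0",
    model_type="gradient-boosted trees",
    hyperparameters={"max_depth": 8, "learning_rate": 0.05},
    training_method="supervised",
    training_datasets=["transactions@2024-05-15"],  # ties the model to its data lineage
)
```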
Dependency layer
AI-BOMs map the software components in your AI stacks and their integrations with other tools. This view helps uncover hidden or outdated dependencies—one of the most common and overlooked sources of security flaws—making dependency tracking critical to the success of your AI-BOM.
🛠️ Action steps:
List third-party libraries, frameworks, and runtime environments.
Capture APIs, SDKs, and integration endpoints across the AI stack.
Track both direct and transitive dependencies to surface hidden issues.
Record version numbers and update history for each dependency.
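For the Python portion of the stack, a minimal sketch like the following can enumerate installed packages and versions using the standard library's importlib.metadata; note that it only sees the current Python environment, not system-level or non-Python dependencies:

```python
import importlib.metadata

def installed_dependencies() -> dict:
    """Enumerate installed Python packages and versions for the dependency layer."""
    return {
        dist.metadata["Name"]: dist.version
        for dist in importlib.metadata.distributions()
    }

# Print a pinned, sorted dependency list suitable for feeding into an AI-BOM.
for name, version in sorted(installed_dependencies().items()):
    print(f"{name}=={version}")
```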
Infrastructure layer
Wiz’s AI Security Readiness report also found that 45% of survey respondents run hybrid environments, and 33% operate across multiple clouds. With AI workloads spread across environments like this, it’s difficult to know where models are running, what resources they consume, and how infrastructure risks might impact them—especially without clear visibility.
🛠️ Action steps:
Document servers, GPUs, and networking devices across environments.
Capture cloud provider details, including regions, tenancy, and configurations, for hybrid and multi-cloud environments.
Record scaling needs and establish performance baselines for monitoring.
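As one narrow, local example, the sketch below queries GPU details with nvidia-smi; it assumes NVIDIA hardware with the driver installed, and cloud-level details like regions and tenancy would come from your provider's APIs instead:

```python
import subprocess

def gpu_inventory() -> list:
    """Query local NVIDIA GPUs via nvidia-smi (assumes the NVIDIA driver is present)."""
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total,driver_version",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    return [line.strip() for line in result.stdout.splitlines() if line.strip()]

for gpu in gpu_inventory():
    print(gpu)  # e.g., "NVIDIA A100-SXM4-80GB, 81920 MiB, 535.129.03"
```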
Security and governance
The AI Security Readiness report also found that 31% of organizations list a lack of AI security expertise as their top challenge. This skills gap makes it harder to apply consistent safeguards across different environments. By embedding security controls and governance policies directly into the AI-BOM, you can reduce dependency on individual expertise and enforce security more consistently.
🛠️ Action steps:
Specify encryption and access controls for datasets and models.
Define policy-as-code (PaC) rules and compliance mappings.
Log audit trails, ownership changes, and model or data drift.
Record governance measures like risk scores and approval workflows.
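To illustrate the audit-trail step, here is a minimal sketch of a tamper-evident log in which each entry hashes its predecessor, so any retroactive edit breaks the chain; the event fields are hypothetical:

```python
import hashlib
import json
import time

def append_audit_event(log: list, event: dict) -> None:
    """Append a tamper-evident audit event: each entry hashes the previous one."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"timestamp": time.time(), "event": event, "prev_hash": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)

audit_log = []
append_audit_event(audit_log, {"action": "owner_change",
                               "model": "fraud-detector", "new_owner": "ml-ops"})
append_audit_event(audit_log, {"action": "dataset_swap",
                               "model": "fraud-detector",
                               "dataset": "transactions@2024-06-01"})
```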
People and processes
You need to establish clear ownership of your AI system's components. Without it, shadow AI initiatives can slip into production, creating risks and compliance gaps. To prevent this, you must ensure accountability and governance across your entire AI lifecycle.
🛠️ Action steps:
Map team roles, responsibilities, and owners for each dataset and AI model.
Capture CI/CD workflows, training and retraining schedules, and approval flows.
Usage and documentation
Documenting the intended use, potential misuse, and ethical considerations of your AI models can reduce model bias, avoid regulatory fines, and mitigate reputational harm.
🛠️ Action steps:
Define input and output specifications for each AI model.
Record intended use cases and potential misuse scenarios.
Capture bias checks, ethical implications, and fairness considerations.
Document licensing requirements and restrictions for datasets, models, and libraries.
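As a small illustration of the licensing step, a sketch like the following could flag components whose licenses fall outside a hypothetical organizational allow-list:

```python
# Hypothetical allow-list; adapt to your organization's legal policy.
APPROVED_LICENSES = {"Apache-2.0", "MIT", "BSD-3-Clause"}

components = [
    {"name": "sentence-encoder", "kind": "model", "license": "Apache-2.0"},
    {"name": "web-crawl-corpus", "kind": "dataset", "license": "CC-BY-NC-4.0"},
]

for component in components:
    if component["license"] not in APPROVED_LICENSES:
        print(f"REVIEW: {component['kind']} '{component['name']}' "
              f"uses non-approved license {component['license']}")
```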
Most importantly, your AI-BOM should also account for any custom extensions specific to your organization's needs, as well as a digital signature to ensure authenticity and integrity.
By acting as a central hub of information, the AI-BOM makes it easier to secure, manage, and adapt your AI systems as they evolve.
How does the AI-BOM enable key security functions?
While the AI-BOM provides a complete inventory of AI assets, modern and proactive security requires pairing it with advanced capabilities that automate security enforcement, detect drift in real time, and integrate directly with developer workflows.
Here are essential features that transform your AI-BOM inventory into a proactive security control:
Policy-as-code enforcement and remediation
PaC is the practice of expressing policies as executable code that teams integrate directly into development workflows. This lets you convert compliance requirements into automated checks that run every time you build, deploy, or update AI systems.
With PaC, developers can verify every component in the AI-BOM—datasets, models, dependencies, and infrastructure—against pre-defined rules. This provides AI teams with fine-grained control, allowing them to take immediate actions, such as blocking model deployment if the dataset lacks approval or has dependencies that fail security baselines.
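As a minimal sketch of this idea (real PaC deployments typically use a dedicated policy engine rather than ad hoc scripts), the following hypothetical gate blocks deployment when a training dataset lacks approval or a dependency fails the security baseline:

```python
def deployment_gate(model: dict, bom: dict) -> list:
    """Illustrative policy-as-code check: return violations that block deployment."""
    violations = []
    for dataset_ref in model["training_datasets"]:
        dataset = bom["datasets"].get(dataset_ref, {})
        if not dataset.get("approved", False):
            violations.append(f"dataset {dataset_ref} lacks approval")
    for dep in model["dependencies"]:
        if dep in bom.get("vulnerable_dependencies", set()):
            violations.append(f"dependency {dep} fails the security baseline")
    return violations

# Hypothetical BOM state and model under review.
bom = {
    "datasets": {"transactions@2024-05-15": {"approved": True}},
    "vulnerable_dependencies": {"oldlib==0.9.1"},
}
model = {"training_datasets": ["transactions@2024-05-15"],
         "dependencies": ["oldlib==0.9.1"]}

if violations := deployment_gate(model, bom):
    raise SystemExit("Deployment blocked: " + "; ".join(violations))
```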
The real advantage of PaC is that it automates AI SecOps workflows. For instance, the Wiz AI-SPM integrates with Wiz AI-BOM to detect vulnerabilities in AI pipelines and trigger remediation flows that implement the correct fixes. This allows your team to create flexible workflows, such as rolling back a deployment, updating an insecure library, or alerting the relevant team before pushing insecure AI components into production.
Continuous drift detection
AI systems are constantly evolving, resulting in rapid dependency updates and infrastructure changes. Even small shifts—like swapping a dataset without approval, retraining a model with different parameters, or updating a dependency to a vulnerable version—can open attack paths and undermine governance and compliance efforts.
Incorporating drift detection into your AI-BOM provides a proactive safeguard against these vulnerabilities because it instantly identifies unsafe changes by tracking every component in real time. For example, if a model suddenly starts pulling from a new dataset or someone upgrades a dependency outside of policy, the system will raise an alert.
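A minimal sketch of the underlying mechanism: fingerprint each component of an approved baseline BOM and compare it against the current state, flagging anything that changed. The component names here are hypothetical:

```python
import hashlib
import json

def fingerprint(component: dict) -> str:
    """Hash a canonical serialization of a BOM component so any change is detectable."""
    return hashlib.sha256(json.dumps(component, sort_keys=True).encode()).hexdigest()

def detect_drift(baseline: dict, current: dict) -> list:
    """Report components whose contents changed since the approved baseline."""
    return [
        key for key in baseline
        if fingerprint(baseline[key]) != fingerprint(current.get(key, {}))
    ]

baseline = {"model": {"version": "3.2.0"}, "dataset": {"version": "2024-05-15"}}
current  = {"model": {"version": "3.2.0"}, "dataset": {"version": "2024-06-01"}}

print(detect_drift(baseline, current))  # ['dataset']  <- unapproved dataset swap
```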
Effective AI-BOMs can provide the foundation for drift visibility, but connecting the dots across data, models, and infrastructure layers to explain the potential impact is typically an advanced feature of integrated security platforms like Wiz AI-SPM. Platforms like these build on the AI-BOM to deliver richer context and faster, more precise remediation when the system triggers alerts.
How do AI-BOMs help with AI supply chain risk management?
Few AI systems rely solely on in-house code and APIs. Most inherit dependencies from open-source repositories, third-party model providers, and pre-trained models distributed through sources like Hugging Face, TensorFlow Hub, or the OpenAI API. Just as traditional software supply chains face vulnerabilities through third-party libraries, AI systems introduce new risks via external datasets, model weights, and training pipelines.
An AI-BOM plays a crucial role in AI supply chain security by providing the following features:
Tracking third-party models: AI-BOMs document pre-trained models, their sources, and any modifications, helping your team assess the trustworthiness of external models and avoid introducing unverified components into production.
Identifying data lineage: By capturing how your team collects and labels datasets, AI-BOMs reveal whether the data may carry biases, privacy risks, or quality issues. This is key for auditability and ethical AI practices.
Managing model dependencies: AI-BOMs log the AI frameworks and libraries your models use, tracking adherence to secure, approved versions. This continuous tracking reduces vulnerabilities in production.
Monitoring for adversarial risks: AI-BOMs provide the inventory and provenance you need to track AI model components. This structured visibility enables threat detection by supplying security tools with the necessary context to monitor for patterns and signs of compromise.
Example: Suppose your AI system uses a pre-trained NLP model from an open-source repository. If that model contains undocumented vulnerabilities or was trained on biased or sensitive data, you unknowingly introduce risks into production. Security teams can use an AI-BOM to flag such dependencies, enforce version controls, and provide the structured visibility that external tools require to monitor for known vulnerabilities in AI supply chains.
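One simple safeguard is to pin a cryptographic digest for each vetted external artifact in the AI-BOM and verify it before loading. The sketch below assumes a hypothetical local artifact path and a digest recorded at vetting time:

```python
import hashlib

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Compare a downloaded artifact against the digest pinned in the AI-BOM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

# Usage (hypothetical path and digest, recorded when the model was first vetted):
# if not verify_artifact("models/nlp-encoder.bin", PINNED_DIGEST):
#     raise SystemExit("Artifact does not match its pinned digest; refusing to load.")
```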
What are the benefits of AI-BOMs?
As organizations scale their AI operations, the complexity and volume of AI risks grow rapidly. That’s where AI-BOMs help. They provide a foundation for teams to manage risk, compliance, and governance.
Here are four key benefits of implementing an AI-BOM in your organization:
Gain complete visibility into your AI landscape: AI-BOMs uncover hidden risks by identifying shadow AI tools, outdated components, unvetted datasets, and insecure dependencies in AI models.
Simplify AI regulatory compliance: They provide the documentation and traceability your organization needs to meet audit requirements and stay aligned with frameworks like the EU AI Act and the NIST AI Risk Management Framework (AI RMF).
Increase transparency and build stakeholder trust: AI-BOMs outline how AI systems operate, the safeguards in place, and how your team addresses issues such as bias or misuse.
Strengthen governance across the AI lifecycle: By tracking ownership, usage, and change history, AI-BOMs support reproducibility, auditability, and internal policy enforcement across teams.
How does an AI-BOM help with GenAI security?
Generative AI (GenAI) solutions introduce specialized AI security risks that you can only track by gaining complete visibility of your GenAI adoption. Unlike traditional predictive AI, GenAI dynamically generates text, code, and media, making your ecosystem vulnerable to data leakage, adversarial manipulation, and untracked dependencies.
An AI-BOM helps mitigate these risks by providing the following capabilities:
Detecting sensitive information exposure by documenting how your GenAI models handle input and output data (see the sketch after this list).
Recording the external APIs and libraries of your GenAI systems, helping security teams monitor for risky integrations.
Identifying unauthorized model versions or bypassed safeguards so you can easily detect signs of model tampering or unapproved modifications.
Supporting the monitoring of AI model drift and compliance risks by recording model lineage and training data changes, which external tools can ingest for live tracking.
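To illustrate the first capability above, here is a minimal sketch that scans model output for sensitive-data patterns; the regexes are illustrative only, and production systems rely on dedicated data-classification tooling:

```python
import re

# Illustrative patterns only; real deployments use purpose-built classifiers.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def scan_output(text: str) -> list:
    """Flag GenAI output that appears to contain sensitive data."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

findings = scan_output(
    "Contact jane.doe@example.com and use key AKIAABCDEFGHIJKLMNOP"
)
print(findings)  # ['email', 'aws_access_key']
```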
As adoption grows, embedding GenAI-specific security controls into your AI-BOM is critical to mitigating GenAI risks and maintaining trust and compliance across your AI ecosystem.
How do AI-BOMs help with compliance frameworks?
AI security and compliance are rapidly evolving, with new frameworks emerging to guide risk management and enforce governance. To keep up, you need tools that provide structure and traceability. An AI-BOM serves as that foundation, helping your organization meet industry standards and regulatory requirements by making it easier to trace, audit, and secure AI assets.
Here’s how an AI-BOM helps with maintaining compliance:
Regulatory and industry alignment
AI-BOMs are essential for maintaining regulatory and industry alignment by capturing and mapping the relationships of key details within your AI systems. The following sections highlight how AI-BOMs support your organization’s compliance and governance efforts for these frameworks:
NIST AI RMF
AI-BOMs align with the NIST AI RMF's emphasis on model governance, transparency, and continuous monitoring.
They help organizations document risks across the AI lifecycle and ensure AI deployments maintain trustworthy AI principles.
EU AI Act
AI-BOMs help meet the regulation’s strict requirements on transparency, risk assessment, and documentation by capturing details about model components, training datasets, third-party dependencies, and system usage.
They enable compliance by tracking high-risk AI systems, identifying proprietary components, and ensuring proper documentation for audits.
They also support compliance by documenting critical AI lifecycle elements, like model provenance, training data, and system ownership.
They provide structured visibility into how your team builds, governs, and updates AI components, which is key for aligning with the regulation's requirements.
Model risk management (MRM) in financial services
Banks and financial institutions follow MRM frameworks to govern the use of AI models for lending, fraud detection, and risk assessment.
AI-BOMs improve MRM by tracking model lineage, ensuring transparency, and flagging unauthorized model modifications.
How to build an AI-BOM
Developing an AI-BOM may seem complex, but it becomes more manageable when you approach it with a straightforward, step-by-step process. By following these steps, you can guide your organization from initial visibility planning to full, confident automation:
Plan and scope: Start by identifying the AI systems, teams, and environments the AI-BOM should cover. This step ensures clear boundaries and alignment between your AI-BOM and organizational goals. You must also decide whether the AI-BOM will serve a single project or many projects across your organization's broader AI ecosystem.
Select a framework: Use established frameworks, like SPDX AI, to capture essential details about the datasets, AI models, dependencies, and infrastructure. Standards like these save time and ensure your AI-BOM doesn't miss critical components. Once you've chosen a framework, consider how you'll present and manage the information. Designing the AI-BOM as an interactive catalog makes it easier for teams to browse components and helps keep entries up to date as your AI systems evolve.
Catalog the components: Create an initial inventory by recording the core components of your AI-BOM. As you document each AI component, assign clear ownership so it’s easy to identify who is responsible for those assets. This traceability lays the groundwork for visibility across your AI ecosystem, helping you answer questions like “What assets do we have?” and “Who’s accountable for this risk?”
Operationalize your AI-BOM: Automate dynamic data collection by integrating your AI-BOM with CI/CD and MLOps pipelines. You can use pipeline scripts and automation tools to extract metadata from your AI components during runtime. Implementing automatic updates helps keep the inventory current—the pipeline generates a fresh AI-BOM every time your team trains a new model version or updates a dependency (see the sketch after this list). Finally, centralize and enable version control to preserve a complete history of the AI system.
Implement continuous monitoring and enforcement: Make your AI-BOM proactive by automating security enforcement. Embed automated checks into your pipelines that enforce compliance gates. Next, you’ll want to track component behavior to detect unauthorized changes or drifts, since even minor, unapproved changes to models or dependencies can introduce security vulnerabilities or break compliance. Finally, set up remediation workflows to act on policy violations.
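To make step 4 concrete, here is a minimal sketch of a pipeline step that regenerates a BOM snapshot tagged with the current git commit; the component references are hypothetical, and it assumes the step runs inside a git repository:

```python
import json
import subprocess
import time

def generate_bom_snapshot(components: dict, output_path: str) -> None:
    """Write a timestamped AI-BOM snapshot tagged with the current git commit.

    Intended as a CI/CD or MLOps pipeline step that runs after each
    training job or dependency update.
    """
    commit = subprocess.run(
        ["git", "rev-parse", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    snapshot = {
        "generated_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "git_commit": commit,
        "components": components,
    }
    with open(output_path, "w") as f:
        json.dump(snapshot, f, indent=2, sort_keys=True)

# Example: regenerate the BOM after a training run (hypothetical components).
generate_bom_snapshot(
    {"model": "fraud-detector@3.2.1", "dataset": "transactions@2024-06-01"},
    "ai-bom.json",
)
```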
With these steps in place, your AI-BOM becomes a foundation for managing AI assets and supporting security, compliance, and governance as your systems scale.
However, challenges can still arise due to difficulties in maintaining accuracy in dynamic AI environments, managing third-party components, and aligning with evolving regulations. To address these challenges, consider advanced solutions like Wiz’s AI security posture management (AI-SPM).
How to implement an AI-BOM with Wiz
Creating and maintaining AI-BOMs can be challenging, especially as AI ecosystems grow in scale and complexity. But Wiz’s AI-SPM platform automates this process by providing continuous visibility, real-time risk monitoring, and compliance management for all your AI assets.
With its agentless, cloud native AI security, Wiz helps organizations operationalize their AI-BOM effectively through these features:
Automated discovery and inventory: Wiz automatically scans your AI ecosystem across cloud, on-premises, and third-party services, identifying and cataloging AI assets from the data, model, dependency, and infrastructure layers.
Real-time AI risk monitoring: Our platform continuously monitors your AI-BOM for security risks, identifying threats such as outdated or insecure AI libraries, data leakage, model tampering, and compliance gaps. Mika AI, Wiz's AI security assistant, enhances this monitoring by automatically analyzing patterns across your AI assets, providing intelligent recommendations for risk mitigation, and helping security teams understand complex AI attack paths through natural language explanations.
Interactive, user-friendly interface: Wiz’s interactive cataloging makes it easy to navigate and manage your AI assets. The catalog supports features like tags, filters, search, visualization, and digital signatures. The Wiz SecOps AI Agent takes this further by enabling teams to query their AI-BOM using natural language, automatically investigate security incidents involving AI components, and generate remediation playbooks tailored to your specific AI infrastructure.
Seamless integration with AI-SPM: Wiz integrates your AI-BOM with full AI-SPM to incorporate features like drift detection, governance and policy enforcement, and automated compliance reporting.
Whether you’re dealing with AI-specific vulnerabilities or preparing for future regulations, Wiz has the tools to enable your organization to confidently innovate with AI while maintaining robust security and compliance. See how Wiz's AI-SPM can help you eliminate blind spots, automate risk detection, and meet evolving compliance demands.
Ready to see our AI capabilities in action? Request a demo to experience how Wiz streamlines AI security operations from code to cloud.
Sample AI Security Assessment
Get a glimpse into how Wiz surfaces AI risks with AI-BOM visibility, real-world findings from the Wiz Security Graph, and a first look at AI-specific Issues and threat detection rules.
Get Sample Report