What is an AI bill of materials?

An AI bill of materials (AI-BOM) is a complete inventory of an organization’s AI ecosystem, including AI models, datasets, services, infrastructure, and third-party dependencies, along with the relationships between them.

AI-BOMs use structured formats like SPDX extensions to make AI components easier to share, audit, and reason about across teams, much like a software bill of materials (SBOM). Unlike a simple list, an AI-BOM captures how models connect to data, services, and environments, providing the traceability teams need to understand how AI systems operate.

What’s the difference between AI-BOM and SBOM?

AI-BOMs serve the same function as SBOMs but address the unique complexities of modern AI systems. While an SBOM focuses on static software components, AI systems involve non-deterministic models, evolving algorithms, and data dependencies. Capturing these complexities provides the foundation for effective AI Security Operations.

An AI-BOM builds on the SBOM concept, extending beyond code to include models, datasets, and dynamic dependencies—everything that influences AI system behavior.

Why have AI-BOMs become essential?

An example of how Wiz maps the visibility of an AI-BOM to the Security Graph

The following converging forces make AI-BOMs a critical component of responsible AI governance:

  • AI risk and transparency demands: As organizations embed generative AI and AI-powered applications into business operations, they need clear visibility into the AI assets they’re running and how those assets might introduce vulnerabilities or compliance gaps.

  • Regulatory pressure: New policies, like the EU AI Act and the NIST AI Risk Management Framework, require organizations to maintain detailed records of AI components, their usage, and their associated risk profiles. The United States’ Executive Order 14110 on AI governance further emphasizes the need for traceability in AI systems.

  • Supply chain security concerns: The AI attack surface extends beyond your own infrastructure to include third-party models, open-source libraries, and AI services. Just as supply chain attacks target software dependencies, AI systems face similar risks from compromised models, poisoned datasets, and vulnerable APIs.

  • Internal governance requirements: Organizations implementing responsible AI initiatives need mechanisms to track model lineage, enforce usage policies, and ensure that AI adoption aligns with business values. AI-BOMs provide the data foundation to make these governance initiatives auditable and enforceable.

Wiz also helps security teams build and maintain AI-BOMs by providing continuous visibility into cloud-hosted model dependencies, automatically discovering AI services, and mapping their connections across your infrastructure.

Real-world example: Wiz researchers discovered architecture risks for AI-as-a-Service providers 

In April 2024, Wiz Research uncovered critical isolation vulnerabilities in Hugging Face’s AI-as-a-Service platform by uploading a malicious pickle-formatted model that achieved remote code execution and potentially enabled cross-tenant access to other customers’ models and sensitive data.

The vulnerabilities stemmed from gaps that a comprehensive AI-BOM would have surfaced: insufficient sandboxing in shared inference infrastructure, overly permissive container registry access, and Amazon EKS IMDS exposure. To resolve them, Hugging Face worked closely with Wiz to validate findings and implement mitigations, including enhanced tenant separation and improved security controls.

Hugging Face’s experience demonstrates how AI-BOM visibility, along with continuous monitoring, helps organizations detect supply chain risks in AI systems before attackers can exploit them. By maintaining a complete inventory of AI components, their dependencies, and access paths, security teams can identify dangerous configurations and enforce guardrails as AI systems scale.

7 core components of an AI-BOM

An AI-BOM captures more than a list of models. For effective security, governance, and operations, an AI-BOM documents the full set of components powering an AI system and their relationships.

A complete AI-BOM includes the following seven core components at a minimum:

1. Data layer

The data layer captures all data assets AI systems rely on for training, inference, and storage. Understanding these data dependencies is essential for managing data scientists’ workflows and ensuring compliance with data privacy regulations. 

Here are the components of the data layer:

  • Training data: The datasets teams use to train or fine-tune models, including their origin, licensing, and any applied preprocessing 

  • Inference-time data: Data sources the model accesses during production, such as real-time APIs, feature stores, or data warehouses

  • Data stores: The underlying storage systems (like cloud storage, databases, or vector databases) that hold AI-related data

2. Model layer

The model layer tracks AI models, their metadata, and their evolution over time. Tracking these models lets teams maintain control over versions and configurations. 

The model layer includes these components:

  • Foundation models: Pre-trained models from providers like OpenAI, Anthropic, or open-source repositories

  • Fine-tuned models: Models customized for specific use cases through transfer learning or additional training

  • Internally trained models: Models developed entirely in-house, including custom architectures and algorithms

  • Model versions and configurations: Specific versions deployed in production, along with their hyperparameters and deployment contexts

3. Dependency layer

The dependency layer documents the software components that AI systems are built on, helping teams identify potential vulnerabilities in the AI supply chain and track where security risks originate across the software stack. 

The dependency layer comprises:

  • ML frameworks: Software development frameworks (like TensorFlow, PyTorch, or JAX) teams use to build and run models

  • AI SDKs: Libraries for interacting with hosted AI services (like OpenAI SDK, Anthropic SDK, or LangChain)

  • Third-party packages: Supporting libraries and open-source components that models depend on

  • Runtime dependencies: Everything that’s necessary for training, serving, or orchestrating AI models in production

4. Infrastructure layer

The infrastructure layer tracks the hardware and cloud resources supporting AI workloads. Proactively managing these resources is crucial for AI risk management and cost optimization across cloud environments. 

The infrastructure layer includes:

  • Compute resources: GPUs, TPUs, and other acceleration hardware AI workloads use

  • Storage and networking: The cloud infrastructure supporting AI operations, including network paths between components

  • Cloud environments: Accounts, regions, and deployment boundaries where AI workloads run

5. Security and governance

The security and governance layer enables teams to assess exposure and implement least-privilege access for AI systems by providing comprehensive visibility into identities and access patterns. 

The following components comprise the security and governance layer:

  • Identities and access: Service accounts, roles, permissions, and credentials AI systems use

  • Access paths: How models connect to data sources, downstream services, and external APIs

  • Security controls: Policies, guardrails, and validation mechanisms that teams apply to AI components

6. People and processes

People and processes support accountability and reproducibility across the AI lifecycle by clearly documenting who manages what and how changes occur. 

The following elements comprise this layer:

  • Ownership: Clear assignment of responsibility for each AI component

  • Change history: Audit trails that show who modified components, when, and why

  • Approval workflows: Processes that govern how AI components move through development, testing, and production

7. Usage and documentation

The usage and documentation layer provides context on how AI systems behave and evolve, enabling teams to maintain model quality and compliance over time. 

The usage and documentation layer includes:

  • Model lineage: Upstream and downstream relationships that show how components connect

  • Use cases: Documentation of intended model purposes and acceptable usage boundaries

  • Performance metrics: Accuracy, latency, and other measures teams track over time

These seven components enable an AI-BOM to map how AI systems actually operate in production. By capturing both assets and their relationships, the AI-BOM provides the foundation for traceability, risk assessment, and governance as AI systems evolve.
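To make the seven layers concrete, here is a minimal sketch of what a single AI-BOM entry might look like in code. The field names and values are hypothetical illustrations, not a standard schema; real formats such as SPDX 3.0's AI profile or CycloneDX's machine learning BOM define their own fields.

```python
import json
from dataclasses import dataclass, field, asdict

# Illustrative AI-BOM entry covering the seven layers described above.
# Field names are hypothetical, not drawn from any published standard.
@dataclass
class AIBOMComponent:
    name: str
    component_type: str          # e.g., "model", "dataset", "service"
    version: str
    owner: str                                              # people and processes
    licenses: list[str] = field(default_factory=list)
    dependencies: list[str] = field(default_factory=list)   # dependency layer
    data_sources: list[str] = field(default_factory=list)   # data layer
    infrastructure: list[str] = field(default_factory=list) # infrastructure layer
    access_paths: list[str] = field(default_factory=list)   # security and governance
    intended_use: str = ""                                  # usage and documentation

# Example record for a hypothetical fine-tuned summarization model
support_bot = AIBOMComponent(
    name="support-summarizer",
    component_type="model",
    version="2.1.0",
    owner="ml-platform-team",
    licenses=["Apache-2.0"],
    dependencies=["pytorch==2.3", "transformers==4.41"],
    data_sources=["s3://tickets-curated/2024", "vector-db:support-embeddings"],
    infrastructure=["eks:prod-us-east-1/inference-gpu"],
    access_paths=["crm-api:read", "ticket-db:read"],
    intended_use="Summarize customer support tickets; no PII generation",
)

# Serialize to JSON so the record can be shared and audited across teams
print(json.dumps(asdict(support_bot), indent=2))
```

Because every layer lives in one structured record, relationships (which data sources a model reads, which identities it uses) stay queryable rather than buried in wikis or tribal knowledge.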

How do AI-BOMs enable key security functions?

An example interactive AI-BOM catalog that Wiz’s AI-SPM autogenerated

AI-BOMs underpin these security use cases and benefits across the AI lifecycle:

  • Discovery and inventory: AI-BOMs identify the models, datasets, services, and dependencies running across environments. Surfacing unmanaged or undocumented AI usage as systems evolve lets teams address AI security risks before they escalate.

  • Traceability and explainability: Organizations understand how teams build models, where they run, and the data and services they rely on. Mapping AI behavior back to underlying components simplifies reviews and incident investigations.

  • Risk assessment and prioritization: Teams evaluate exposure based on access to sensitive data, permissions, and downstream dependencies. An AI-BOM lets teams prioritize issues based on the real relationships between components instead of treating findings in isolation.

  • Governance and compliance: AI-BOMs support audits, internal reviews, and regulatory requirements with a structured record of AI components. The record also demonstrates ownership, controls, and change history, helping organizations meet cloud compliance requirements.

  • Change management and incident response: Before deploying updates, teams assess the impact of model changes, data updates, or dependency upgrades. During incidents, AI-BOMs speed up investigations by identifying affected AI components and their blast radius.

This structured record provides the shared context engineering, security, and governance teams need to move beyond static documentation or tribal knowledge. According to the AI Security Readiness Report, organizations with mature AI governance practices manage AI risks and respond to cybersecurity threats significantly better than their peers. Leveraging an AI-BOM lets organizations reason about AI systems using a consistent, traceable view that evolves alongside production.

How AI-BOMs help with compliance frameworks

AI-BOMs serve as the technical foundation for meeting emerging AI governance requirements. They interact with standard compliance frameworks in the following ways:

  • NIST AI Risk Management Framework: NIST’s framework emphasizes transparency, traceability, and continuous monitoring of AI systems. AI-BOMs deliver the structured inventory needed to demonstrate these capabilities during audits.

  • EU Artificial Intelligence Act: The EU AI Act requires organizations to maintain technical documentation for high-risk AI systems, including details about training data, model architecture, and validation processes. AI-BOMs capture this information in a format that supports regulatory reporting.

  • Industry-specific regulations: Financial services, healthcare, and other regulated industries are developing AI-specific compliance requirements. A comprehensive AI-BOM positions organizations to adapt quickly as these regulations take effect.

Implementing cloud compliance tools alongside AI-BOMs lets organizations automate much of the necessary evidence collection for AI governance frameworks.

How to build an AI-BOM with Wiz

Wiz streamlines AI-BOM development with automated discovery and continuous monitoring. Our cloud native application protection platform (CNAPP) enables AI-BOMs through these capabilities:

  • Automated discovery: Wiz automatically discovers AI services, model usage, and supporting infrastructure across cloud environments. This automation eliminates manual cataloging and keeps your AI-BOM current as teams experiment with and deploy new services.

  • Graph-based visibility: Wiz maps every model, dataset, identity, network path, and cloud resource into the Wiz Security Graph. This mapping allows security teams to see the AI that exists, how it behaves in production, what it can access, and where real risk accumulates.

  • Policy enforcement: Teams can embed automated checks into pipelines to enforce compliance gates. Setting up policy as code rules automatically validates AI components before they reach production.

  • Drift detection: Wiz tracks component behavior to detect unauthorized changes or configuration drift. This tracking is critical because even minor, unapproved changes to models or dependencies can introduce vulnerabilities or break compliance. 

  • Integration with workflows: Wiz connects with CI/CD pipelines and remediation workflows to make your AI-BOM actionable. When it detects issues, automated workflows trigger remediation or alert the appropriate teams.
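To illustrate the policy-as-code idea in general terms (this is a hypothetical sketch, not Wiz's actual rule syntax or API), a pipeline gate might check AI-BOM entries against simple rules before a component reaches production:

```python
# Hypothetical pipeline gate: validate AI-BOM entries before deployment.
# Rule names and entry fields are illustrative, not a real product API.
def validate_component(entry: dict) -> list[str]:
    """Return a list of policy violations for one AI-BOM entry."""
    violations = []
    if not entry.get("owner"):
        violations.append("missing owner: every AI component needs an accountable team")
    if entry.get("component_type") == "model" and not entry.get("intended_use"):
        violations.append("missing intended_use: document acceptable usage boundaries")
    banned_formats = {"pickle"}  # e.g., disallow pickle-serialized models
    if entry.get("model_format") in banned_formats:
        violations.append(f"banned model format: {entry['model_format']}")
    return violations

# A non-compliant entry fails all three checks
entry = {
    "name": "support-summarizer",
    "component_type": "model",
    "owner": "",
    "model_format": "pickle",
}
for v in validate_component(entry):
    print("POLICY VIOLATION:", v)
```

Blocking pickle-formatted models is a natural rule here, since that is the same serialization format Wiz Research abused to achieve remote code execution in the Hugging Face case described earlier.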

Building our AI-BOM on agentless cloud visibility means it continuously updates as teams deploy new services or introduce AI agents and tools. As a result, security teams detect shadow AI early, enforce guardrails consistently, and prioritize AI risk using the same context-driven workflows they already use for cloud security.
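The drift detection described above can be reduced to a simple principle: record a cryptographic digest of each approved artifact in the AI-BOM, then re-check deployed artifacts against it. The sketch below shows the idea under assumed file paths and record fields; it is not a real Wiz integration.

```python
import hashlib
import tempfile
from pathlib import Path

# Illustrative drift check: compare a deployed artifact's hash against
# the digest recorded in the AI-BOM at approval time.
def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def detect_drift(bom_record: dict, artifact: Path) -> bool:
    """True if the deployed artifact no longer matches the BOM record."""
    return sha256_of(artifact) != bom_record["sha256"]

# Example: record the hash at approval time, then re-check later
artifact = Path(tempfile.mkdtemp()) / "model.bin"
artifact.write_bytes(b"approved-model-weights")
record = {"name": "support-summarizer", "sha256": sha256_of(artifact)}

artifact.write_bytes(b"tampered-model-weights")  # an unapproved change
print("drift detected:", detect_drift(record, artifact))
```

A hash mismatch alone does not say whether a change was malicious, only that it was unapproved, which is exactly the signal a change-management workflow needs before escalating.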

Choosing Wiz for AI-BOM

Wiz treats AI-BOM as a security-first system of record rather than a static checklist. You can automatically discover and map AI components across your cloud environment, eliminating manual tracking.

The Wiz Security Graph makes AI-BOM actionable by connecting every component. Security teams see what AI exists, how it behaves, what it can access, and where risk accumulates. AI-BOM becomes the foundation for evaluating exposure, blast radius, and ownership as AI systems evolve. Wiz integrates AI assets into a unified view that includes vulnerabilities, misconfigurations, identities, and data so you can secure AI at the speed you build it.

Ready to see our AI capabilities in action? Request a demo today to experience how Wiz streamlines AI security operations from code to cloud. And to understand how Wiz reports on AI security risk, explore our AI Security Assessment Sample Report.


FAQ about AI-BOM

Below are some common questions about AI-BOMs: