What is the AI Bill of Rights?
The AI Bill of Rights is a non-binding framework published by the White House Office of Science and Technology Policy (OSTP) in October 2022. It provides guidance on designing and deploying automated systems that protect civil rights, privacy, and safety.
It emerged alongside other major AI governance efforts, including the EU AI Act and the NIST AI Risk Management Framework. But unlike the EU AI Act, which includes penalties for non-compliance, the AI Bill of Rights is voluntary. Still, its principles are increasingly reflected in federal procurement requirements and regulatory enforcement.
Shaped by input from technology companies, researchers, civil society groups, and the public, the framework centers on five principles: safe systems, discrimination protections, data privacy, transparency, and human alternatives. Together, these principles aim to support AI adoption without sacrificing individual rights or public trust.
25 AI Agents. 257 Real Attacks. Who Wins?
From zero-day discovery to cloud privilege escalation, we tested 25 agent-model combinations on 257 real-world offensive security challenges. The results might surprise you 👀

What automated systems does the AI Bill of Rights apply to?
The AI Bill of Rights applies to automated systems that meaningfully impact individuals' rights, opportunities, or access to critical resources. The OSTP defines automated systems as any technology using computation to make decisions, recommendations, or predictions.
That broad scope encompasses both customer-facing products and internal-use tools, ensuring that AI-powered technology used to determine an employee’s internal promotion, for example, receives the same scrutiny as a public-facing credit scoring model.
The framework targets high-stakes sectors and applications where errors or biases could deny fundamental opportunities. Covered systems include:
Hiring and employment: Resume screening algorithms, candidate ranking tools, and workforce analytics
Financial services: Credit scoring models, loan approval systems, and fraud detection
Healthcare: Diagnostic AI, treatment recommendation engines, and patient triage systems
Public services: Benefits eligibility determination, voting systems, and surveillance technologies
Education: Plagiarism detection, automated grading, and admissions screening
The May 2024 ACLU complaint filed with the Federal Trade Commission against the hiring service vendor Aon illustrates the discriminatory use of AI that the AI Bill of Rights seeks to eliminate. The filing reinforced the framework’s goal of mitigating systems that undermine equity or violate individual privacy.
What are the key principles of the AI Bill of Rights?
Navigating the AI Bill of Rights becomes straightforward when you focus on the five core principles that drive responsible development. These key pillars provide a clear roadmap for governance and security teams to follow as they deploy automated systems.
1. Safe and effective systems
Safe and effective systems require you to uncover potential AI security risks, ethical concerns, and operational failure points. Building these systems mandates rigorous pre-deployment testing, continuous monitoring, and independent evaluation.
Compliance includes regularly scheduled AI red-teaming exercises and security testing to catch vulnerabilities before they impact production environments or user safety.
2. Algorithmic discrimination protections
Algorithmic discrimination protections ensure that automated systems don’t produce inequitable outcomes based on protected characteristics. Regulators, including the FTC and EEOC, increasingly prioritize these protections to mitigate AI security risks and avoid ethical hurdles.
Compliance mandates proactive equity assessments to uncover hidden disparities. These measures protect people from AI-enabled discrimination and shield organizations from federal violations.
3. Data privacy
Gartner research revealed that 42% of IT leaders rank GenAI-related data privacy as their top concern. The AI Bill of Rights framework upholds three data privacy requirements: consent, data minimization, and user control.
Compliance requires respecting individual decisions on how automated systems manage, store, collect, delete, and process data. Establishing training data governance mitigates AI security risks while preventing deceptive tactics, such as retroactively updating terms of service to expand data sharing.
4. Notice and explanation
Notice and explanation mandate transparency by requiring organizations to tell users when an automated system is in use. Teams must disclose the system’s role, explain how AI informs decisions, and provide clear, plain-language documentation.
Using accessible, jargon-free language helps people understand the automated logic affecting them. These requirements align with emerging state transparency laws, including the Colorado AI Act.
5. Human alternatives, consideration, and fallback
The final principle of the AI Bill of Rights gives individuals a way to opt out of automated systems and engage a human decision-maker. To support this right, organizations need personnel who can review and correct automated outcomes when necessary. Customer service chatbots with immediate human escalation paths illustrate this principle in practice. A human review layer strengthens accountability and helps reduce AI security risks.
How can organizations benefit from following the AI Bill of Rights?
Aligning with the AI Bill of Rights principles secures practical advantages beyond ethical positioning. Adopting these standards helps practitioners cut technical debt, lower legal friction, and mitigate AI security risks. Additional advantages include:
Increased trust: Customers, partners, and regulators frequently evaluate organizations based on their AI integrity. Proving alignment with recognized governance principles signals responsibility and builds credibility with stakeholders monitoring your automated systems. Unwavering transparency shields brand reputation and fortifies public trust against potential AI security risks.
Stronger compliance posture: AI introduces new compliance hurdles for existing regulations such as GDPR, CCPA, and HIPAA. The framework's principles provide a foundation that aligns with multiple regulatory requirements, minimizing the effort required to prove AI compliance as formal laws emerge. Early alignment positions organizations ahead of mandatory compliance, such as proactive GDPR preparation for global enterprises.
Reduced risk exposure: Proactively addressing algorithmic bias, data privacy, and transparency gaps lowers the likelihood of enforcement actions, litigation, or reputational damage. Organizations that wait until mandatory requirements are in place often face higher remediation costs than those that build responsible practices early. Building governance today shields your bottom line from the financial shocks associated with retroactive fixes.
Want a peek at how expensive AI compliance failures become? Garante, Italy’s data protection authority, fined the city of Trento 50,000 euros in 2024 for violating privacy rules regarding AI-powered street surveillance.
Get a Sample AI Security Report
In this Sample Assessment Report, you’ll get a peek behind the curtain to see what an AI Security Assessment should look like.

What challenges did the AI Bill of Rights introduce?
The AI Bill of Rights introduced practical challenges for organizations already navigating complex compliance environments. Criticisms frequently focus on the lack of enforcement mechanisms and the vagueness of implementation guidance, as well as innovation, as teams hesitate to deploy under uncertain expectations.
Key challenges include:
Vague implementation guidance: Because the AI Bill of Rights is voluntary and high-level, organizations often struggle to translate its principles into concrete security and governance controls.
Regulatory overlap and compliance uncertainty: Teams must map the framework against existing federal, state, and industry-specific requirements, creating uncertainty about how voluntary guidance fits with binding obligations.
Limited visibility into AI assets, data flows, and access: Organizations cannot operationalize AI governance without a unified view of where AI is running, what data it touches, and who can access it.
Security gaps caused by poor AI inventory: When teams lack visibility into AI resources, they are more likely to miss vulnerabilities, misconfigurations, and overprivileged AI workloads that create compliance and security risks.
These challenges shape the broader debate over AI governance, especially as organizations weigh innovation speed against accountability, security, and regulatory pressure.
The debate over AI governance
The AI Bill of Rights remains in effect as voluntary guidance, even as the federal AI policy landscape—shaped by initiatives such as the U.S. AI Action Plan—continues to evolve.
In January 2025, the Removing Barriers to American Leadership in Artificial Intelligence executive order revoked Executive Order 14110, which previously mandated federal requirements for AI safety testing and reporting. With the new executive order, policymakers aimed to dismantle regulatory hurdles they identified as barriers to AI innovation, prioritizing development speed over centralized oversight.
Because the AI Bill of Rights never functioned as a binding regulation, the 2025 revocation left its status as published OSTP guidance intact. However, the policy shift decentralized federal enforcement of AI governance, leaving organizations to determine their own level of alignment with the framework's core principles. The new landscape highlights the constant tension between speedy innovation and safety.
Despite the federal pullback, the framework's principles continue to influence state legislation—with state legislatures introducing 385 AI measures during the 2025 legislative session alone. The FTC and international bodies continue to reference these standards to evaluate automated accountability.
How does Wiz help with AI compliance?
Wiz AI Security, including Wiz AI-APP and AI-SPM, gives organizations the technical foundation to operationalize AI compliance. For the AI Bill of Rights, that means supporting principles tied to data privacy, safe systems, and security visibility with end-to-end context across code, cloud, and runtime.
That broader AI-APP approach matters because modern AI risk rarely sits in one layer. Teams need to see how models, agents, MCP servers, guardrails, identities, and connected data stores relate to one another before they can turn governance requirements into technical enforcement.
Wiz helps by:
Discovering AI workloads across cloud environments
Flagging misconfigurations in AI pipelines
Detecting exposed training data
Monitoring overprivileged access to AI resources
Identifying attack paths that threaten model integrity or sensitive data
These capabilities help close a common governance gap: legal teams set AI policy, while development teams deploy models without centralized oversight. When organizations lack a unified inventory of AI assets and agentic components, compliance teams cannot see where sensitive data interacts with automated systems or how deployed behavior differs from design intent. Wiz bridges that gap by connecting code, cloud, runtime, models, and data in a single security graph, turning policy into actionable security controls.
That said, Wiz does not deliver full AI compliance on its own. Organizations still need governance, legal, and ML stakeholders to address explainability, bias, and regulatory alignment. Wiz secures the infrastructure and data layer, while those teams handle the broader ethical and compliance requirements.
Platforms like Wiz transform abstract protections into enforceable checks by continually connecting posture findings with code, cloud, and runtime context. Using AI-SPM within the broader AI-APP model helps teams validate exposed AI endpoints, insecure agent behavior, and attack paths to sensitive data while advancing responsible AI development.
Want to see how Wiz AI-SPM can help secure your AI stack? Get a demo today.