Biden's AI Executive Order: What it says, and what it means for security teams

The 2023 Executive Order has far-reaching implications for companies relying on AI. Here is a breakdown of it through the lens of a Security Engineer, including an analysis, a summary of the impact on AI safety and privacy protection, and a look at how the order will affect security teams.

10 minutes read

The 2023 Executive Order on AI (order number 14110), issued by President Biden, has far-reaching implications for companies relying on AI. It establishes a new set of standards for AI safety and security, privacy protection, and equitable use. This is a significant moment for AI in America and its impact will be felt by privacy and security teams. In most organizations, protecting privacy falls on the shoulders of the security team, so security teams specifically need to understand what this order means for them.

Practical Explanations of Key Points 

First, we’ll go over the key points with a relevant quote and discuss the implications. Then we’ll cover the practical themes that run throughout the entire Executive Order. Let’s begin!

The All-Encompassing Directive

“Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions ever taken by any government to advance the field of AI safety.”

The section above is the most descriptive, tactical, and comprehensive for  security teams. Any organization developing AI features or systems will need to comply with the upcoming NIST standards, so they should immediately begin putting processes in place to do so. The wording implies that the standards will require testing the systems to ensure they are safe, secure, and trustworthy. As those terms are ambiguous until NIST unveils their AI standards, a best-effort start on the testing program is a step in the right direction.

The action item here is "extensive red-team testing." Red-teaming has become a widely-used term with a lot of definitions in the age of AI, so we'll break it down in the "testing" section below. 

If your company falls into any of the categories listed in the follow-up sentences of this directive, you'll need to comply with the respective governmental organizations' directions. The distinction is vague, but it appears that Critical Infrastructure as well as chemical, biological, radiological, nuclear, and cybersecurity risks will work with Homeland Security, while anything pertaining to "energy" in that list will work with the Department of Energy.

Privacy

"Protect Americans’ privacy by prioritizing federal support for accelerating the development and use of privacy-preserving techniques—including ones that use cutting-edge AI and that let AI systems be trained while preserving the privacy of the training data."

This directive emphasizes the importance of privacy in the era of AI. It says that companies making AI applications must prioritize privacy-preserving techniques. For security teams, this means a significant shift in focus towards privacy-centric practices. These teams will need to model potential threats that could impact user privacy and develop strategies to mitigate these risks.

In practical terms, this could involve a review of AI systems to assess their privacy measures. Security teams may need to collaborate closely with their AI developers to understand the intricacies of their AI systems and identify potential privacy vulnerabilities. This may also mean staying up to date with the latest adversarial attacks that specifically target privacy and keeping track of the evolving definitions of what constitutes "private information."

Investing in privacy-related training might be necessary to equip security teams with the skills and knowledge necessary to tackle these new challenges. Along with that, a potential implication is the need for transparency. Security teams may have to push for communication with users of how their data is being used and protected, which could involve developing user-friendly privacy policies and consents.

The business benefits of implementing these practical strategies are significant. Beyond complying with the Executive Order, by prioritizing user privacy, companies will build stronger relationships with their customers due to their focus on trust and transparency.

Workforce 

"Develop principles and best practices to mitigate the harms and maximize the benefits of AI for workers by addressing job displacement; labor standards; workplace equity, health, and safety; and data collection. These principles and best practices will benefit workers by providing guidance to prevent employers from undercompensating workers, evaluating job applications unfairly, or impinging on workers’ ability to organize."

This order addresses the impact of AI on the workforce. It's a clear recognition of the transformative power of AI and its potential to disrupt traditional work structures and processes. 

Security teams will need to investigate and test any software that uses AI to determine compensation and handle job applications or that affects workers' ability to organize.  This could mean digging into the ethical implications of using AI to monitor employee behavior (in case it could inhibit their ability to organize), or it could involve working with HR to develop policies for using AI in hiring or performance evaluation.

Data collection is another potential focus area. Security teams might need to ensure that any data collected by AI systems is done in a way that respects employees' privacy and is compliant with data protection laws. This could involve implementing robust data governance policies and procedures, and ensuring that all data collected is securely stored and appropriately used.

Besides testing AI systems, the job displacement, labor standards, workplace equity, health, and safety issues are outside the scope of the security team for most organizations. Overall, the impact of this section affects organizations less than the privacy and all-encompassing portions above.

Healthcare 

"Advance the responsible use of AI in healthcare and the development of affordable and life-saving drugs. The Department of Health and Human Services will also establish a safety program to receive reports of—and act to remedy – harms or unsafe healthcare practices involving AI."

This portion of the Executive Order has significant implications for security teams at companies that are developing AI for use in healthcare. These teams will need to ensure that their AI systems are not introducing additional risks to their patients. This could involve implementing security measures to protect patient data and placing guardrails on AI systems, or it could mean working closely with healthcare providers to ensure that AI tools are used responsibly.

The healthcare industry is often lagging in advancements when it comes to cybersecurity, so it may be a significant challenge. Furthermore, the second sentence outlines a path for unsafe practices to be reported; this emphasizes how serious the administration is about enforcing the executive order in the healthcare industry.

Additionally, security teams may need to work closely with other stakeholders in the healthcare sector, including healthcare providers and regulatory bodies. This collaboration could involve sharing best practices or developing new standards for the use of AI in healthcare, so keeping up to date with current regulations and training staff will be important.

Criminal Justice

"Ensure fairness throughout the criminal justice system by developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis. The Department of Health and Human Services will also establish a safety program to receive reports of—and act to remedy – harms or unsafe healthcare practices involving AI."

This directive is primarily aimed at the criminal justice system, but it has implications for any company using or developing AI for surveillance or predictive analytics. Security teams at these companies will need to ensure that their AI systems are not only secure but also fair. This could involve implementing new testing procedures to check for bias in those AI systems, or it could mean working with legal experts to ensure that AI tools are used in a way that respects people's constitutional rights.

The use of AI in surveillance and predictive analytics has been a contentious issue. While AI can enhance the capabilities of these systems, it also raises significant ethical and legal concerns. This directive from the executive order is a clear call to action for security teams to address these concerns head-on.

For companies using AI in surveillance, this could mean a thorough review of any system used to identify individuals or patterns of behavior. Bias in these systems can lead to unfair targeting of certain groups, infringing on their rights to privacy and potentially leading to legal repercussions. Security teams will need to work closely with AI developers and legal experts to ensure these systems are as unbiased as possible.

Predictive analytics, particularly in the context of the criminal justice system, also present significant challenges. These systems are often used to predict the likelihood of recidivism or to assist in sentencing decisions. However, if these systems are biased, they can lead to unjust outcomes. Security teams will need to implement rigorous testing procedures to identify and mitigate any bias in these systems.

Landlords and Contractors

"Provide clear guidance to landlords, Federal benefits programs, and federal contractors to keep AI algorithms from being used to exacerbate discrimination."

This directive is particularly relevant if your company is involved in landlord-tenant issues or if it employs federal contractors. The aim here is to prevent AI from being used in a discriminatory way. This is a significant step towards ensuring fairness and equity in AI applications, particularly in areas where AI has the potential to significantly impact individuals' lives, such as housing and employment.

Similar to some previous sections, for security teams, this directive presents a unique challenge. It's not enough to ensure that the AI systems the company uses are secure; they must also be free from discriminatory bias. This is a complex task that goes beyond traditional security measures. It requires a good bit of understanding how AI models work, how they can inadvertently lead to discriminatory outcomes, and how to test for it. Many teams may need to hire — or work closely with — external data scientists to ensure that the AI models being used are being tested for bias. 

Furthermore, security teams may need to develop strategies to mitigate any identified biases. This could involve fine-tuning them, filtering output that is biased, or changing models altogether. It could also involve implementing safeguards such as setting thresholds for decision-making that ensure fairness.

Finally, security teams will need to ensure that these measures are not just implemented, but also maintained over time. This could involve regular audits and updates to the AI systems and ongoing training for staff to ensure that they are aware of the importance of fairness and non-discrimination in AI.

The Major Themes

The Executive Order represents a massive moment in the AI industry, particularly for security teams. It's a call to action that necessitates a shift in focus and a deeper understanding of the implications of AI within applications. The world is still adjusting to the breakthroughs in AI — even as adoption skyrockets — so while [RR7] the government may be gracious initially, this is a heavy lift for most companies. They need to get ahead of it. The major themes of how to comply with the Executive Order for most security teams are outlined below. 

Training

The Executive Order requires a wide variety of changes. There will be a great need for training in most organizations. Security teams will need to understand the nuances of how different AI systems and models work. They'll also have to understand how features like web search, document retrieval, and third-party plugins work. Additionally, they'll have to understand the different ways these features can be exploited. They'll need training on how prompt injection can be used by adversaries. This includes understanding what constitutes privacy, how to protect it, and how to defend against attacks that target privacy specifically.

Security teams will also need to understand the regulations their organizations are under and how to comply with them. The requirements and directives will likely change over time as society develops alongside AI functionality, so ongoing education will be necessary as well.

Knowledge Sharing

The Executive Order underscores the importance of staying informed. As we covered in the Training section, security teams need to understand what is required of them  and know who they're accountable to. Staying up to date with regulations will be critical. As the government announces new AI standards, security teams need to ensure they're compliant. This could involve regular reviews of the company's AI practices and procedures to ensure they align with the latest regulations.

Security teams also need to understand what their companies are building and how it's being used. This should include regular communication with the AI developers at their organizations — and other stakeholders — to stay updated on AI projects. The new features and systems should go into a continuous testing cycle for most organizations. And on the topic of testing...

Testing

Testing is a significant theme in the Executive Order. It states that building AI systems is not enough; but that these systems should be extensively tested to ensure they're safe, secure, and trustworthy. This involves implementing testing procedures, including red teaming, to identify potential vulnerabilities.

Red teaming is a form of testing where a group of security professionals, known as the red team, mimic potential attackers to find vulnerabilities in a system. In the context of AI, this usually means attempting to get the AI models to return content that they shouldn't, in the form of output that is biased, explicit, or harmful. Thankfully, it seems that the industry is adopting the practice of simulating adversarial attacks on AI systems to test their robustness and security.

Since testing extends to fairness and non-discrimination, security teams need to ensure that their AI systems are fair. This isn't something security teams have focused on in the past, so there will likely be a learning curve. It may mean implementing new testing procedures to check for bias in an automated fashion.

Collaboration

The Executive Order highlights the need for collaboration and transparency. Security teams will need to work closely with AI developers, industry providers, regulatory bodies, and other stakeholders. This collaboration could involve sharing best practices, developing new standards, or working together to address shared challenges. Collaboration requires good communication, so having a policy outlining the best communication channel and guidelines for response times would be beneficial.

Conclusion

The Executive Order issued by President Biden represents a significant shift in the way AI is regulated in America. For security teams at companies using AI, it presents a range of new challenges and opportunities. AI unlocks tremendous innovation, and it also requires security teams to adapt their systems and processes so they can secure the AI pipelines and protect against AI misconfigurations and vulnerabilities. By understanding the implications of these directives, security teams can ensure that their use of AI is not only secure but also ethical and compliant with the new standards.

See for yourself...

Learn what makes Wiz the platform to enable your cloud security operation

Get a demo

Continue reading

Get a personalized demo

Ready to see Wiz in action?

“Best User Experience I have ever seen, provides full visibility to cloud workloads.”
David EstlickCISO
“Wiz provides a single pane of glass to see what is going on in our cloud environments.”
Adam FletcherChief Security Officer
“We know that if Wiz identifies something as critical, it actually is.”
Greg PoniatowskiHead of Threat and Vulnerability Management