
AI Security Explained: How to Secure AI

Wiz Experts Team
8 minute read

What is AI security? 

AI security is a key component of enterprise cybersecurity that focuses on defending AI infrastructure from cyberattacks. Focusing on AI security is vital because numerous AI technologies are woven into the fabric of organizations. AI is the engine behind modern development processes, workload automation, and big data analytics. It’s also increasingly becoming an integral component of many products and services. For example, a banking app provides financial services, but AI-powered technologies like chatbots and virtual assistants within these apps are what set it apart.

The global AI infrastructure market is forecast to reach more than $96 billion by 2027. According to McKinsey, there was a 250% rise in AI adoption from 2017 to 2022, and the most prominent use cases included service operations optimization, creation of new AI-based products, customer service analytics, and customer segmentation. Unfortunately, every single one of these AI use cases is susceptible to cyberattacks and introduces new vulnerabilities.

That’s just the tip of the iceberg. Data engineers and other agile teams leverage GenAI solutions like large language models (LLMs) to develop applications at speed and scale. Many cloud service providers (CSPs) offer AI services to support this development. You may have heard of or used AI services like Azure Cognitive Services, Amazon Bedrock, and GCP’s Vertex AI. While such services and technologies empower teams to develop and deploy AI applications faster, these pipelines introduce numerous risks. The bottom line is that AI is not quite as secure as many believe, and it requires robust fortifications.

How (un)secure is artificial intelligence? 

The narrative surrounding AI often focuses on ethics and the possibility of AI replacing human workforces. However, Forrester claims that the 11 million jobs in the US that will be replaced by AI by 2032 will be balanced by other new work opportunities. The relatively overlooked complexity is at the crossroads of AI and cybersecurity. Threat actors leverage AI to dispense malware and infect code and datasets. AI vulnerabilities are a common vector for data breaches, and software development lifecycles (SDLCs) that incorporate AI are increasingly susceptible to vulnerabilities. 

GenAI, in particular, poses many risks. The dangerous potential of GenAI is seen in tools like WormGPT, which is similar to ChatGPT but with a focus on conducting criminal activity. Luckily, AI is also being applied in cybersecurity to ward off such threats, and ChatGPT security is evolving. The AI in cybersecurity market is projected to reach $60.6 billion by 2028, a sign that human security teams will struggle to identify and remediate large-scale, AI-facilitated cyberattacks without utilizing AI themselves.

Cybersecurity AI will continue to play a large role in combating AI-powered security threats. It’s important because threat actors will use LLM prompts as a vector to manipulate GenAI models to reveal sensitive information. CSPs are likely to fully embrace the AI revolution soon, which means that significant infrastructure and development-related decisions will be facilitated by AI chatbots. The use of chatbots as weapons (like WormGPT or FraudGPT) suggests that companies will have a lot of unpredictable AI-related cybersecurity challenges to reckon with. 

It’s important to remember that AI can be secured. However, it’s not inherently secure.

AI security risks

The best way to tackle AI security is to thoroughly understand the risks. Let’s take a look at the biggest AI security risks:

Increased attack surface

The integration of AI, such as GenAI, into SDLCs fundamentally changes an enterprise's IT infrastructure and introduces many unknown risks. This is essentially a broadening of the attack surface. The overarching security challenge of AI is to ensure that all AI infrastructure is under the stewardship of security teams. Complete visibility of AI infrastructure can help remediate vulnerabilities, reduce risks, and limit your attack surface. 

Higher likelihood of data breaches and leaks

The consequences of a broader attack surface include downtime, disruption, lost profits, reputational damage, and other major long-term effects. According to The Independent, 43 million sensitive records were compromised in August 2023 alone. Suboptimal AI security can compromise your crown jewels and add you to the list of data breach victims.

Chatbot credential theft

Stolen ChatGPT and other chatbot credentials are the new hot commodity in illegal marketplaces on the dark web. More than 100,000 ChatGPT accounts were compromised between 2022 and 2023, highlighting a dangerous AI security risk that's likely to increase.

Vulnerable development pipelines

AI pipelines tend to broaden the vulnerability spectrum. For instance, the realm of data science, encompassing data and model engineering, often operates beyond traditional application development boundaries, leading to novel security risks.

The process of gathering, processing, and storing data is fundamental to machine learning engineering. Integrating these data workflows with model engineering tasks demands robust security protocols to protect data from breaches, intellectual property theft, supply chain attacks, and data manipulation or poisoning. Ensuring data integrity is pivotal in reducing both deliberate and accidental data discrepancies.
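
To make that data-integrity point concrete, here is a minimal Python sketch of one common safeguard: hashing approved training files and re-verifying those hashes before every training run, so both deliberate tampering and accidental drift surface early. The `training_data` directory and `manifest.json` file name are hypothetical placeholders.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(data_dir: str, manifest_path: str = "manifest.json") -> None:
    """Record a hash for every approved dataset file so later changes are detectable."""
    files = sorted(p for p in Path(data_dir).rglob("*") if p.is_file())
    manifest = {str(p): sha256_of(p) for p in files}
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))

def verify_manifest(manifest_path: str = "manifest.json") -> list[str]:
    """Return any files whose contents no longer match the recorded hashes."""
    manifest = json.loads(Path(manifest_path).read_text())
    return [
        name for name, expected in manifest.items()
        if not Path(name).is_file() or sha256_of(Path(name)) != expected
    ]

if __name__ == "__main__":
    build_manifest("training_data")   # run once, when the dataset is reviewed and approved
    tampered = verify_manifest()      # run again before every training job
    if tampered:
        raise SystemExit(f"Dataset integrity check failed for: {tampered}")
```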

Data poisoning

Data poisoning is the manipulation of the data that GenAI models learn from. It involves injecting malicious samples into training datasets to influence outcomes and create biases. The Trojan Puzzle, an attack designed by researchers, is an example of how threat actors might be able to influence and infect the datasets a GenAI model learns from in order to plant malicious payloads in its output.

Direct prompt injections

Direct prompt injections are a type of attack where threat actors deliberately craft LLM prompts intended to compromise a model or exfiltrate sensitive data. There are numerous risks associated with direct prompt injection, including malicious code execution and the exposure of sensitive data.

Indirect prompt injections

An indirect prompt injection is when a threat actor shepherds a GenAI model toward an untrusted data source to influence and manipulate its actions. This external, untrusted source can be custom-designed by threat actors to deliberately induce certain actions and influence payloads. Repercussions of indirect prompt injections include malicious code execution, data leaks, and serving end users misinformation and malicious content.

Hallucination abuse

AI has always been prone to hallucinating information, and innovators around the world are working to reduce the magnitude of the problem. Until they do, AI hallucinations will continue to pose significant cybersecurity risks. Threat actors are beginning to register and "legitimize" resources that AI models hallucinate (such as software packages or domains a model invents) so that end users receive information influenced by malicious and illegitimate sources.

AI security frameworks and standards

Now that you know the biggest AI security risks, let’s take a brief look at how enterprises can mitigate them. Cybersecurity frameworks have long been a powerful tool for enterprises to protect themselves from rising threats, and the following AI security frameworks provide a consistent set of standards and best practices for remediating security threats and vulnerabilities:

  • NIST’s Artificial Intelligence Risk Management Framework breaks down AI security into four primary functions: govern, map, measure, and manage.

  • MITRE’s Sensible Regulatory Framework for AI Security and its ATLAS Matrix break down attack tactics and propose AI regulations.

  • OWASP’s Top 10 for LLMs identifies the most critical vulnerabilities associated with LLMs, such as prompt injections, supply chain vulnerabilities, and model theft, and proposes standards to protect against them.

  • Google’s Secure AI Framework offers a six-step process to mitigate the challenges associated with AI systems. These include automated cybersecurity fortifications and AI risk-based management.

  • Our own PEACH framework emphasizes tenant isolation via privilege hardening, encryption hardening, authentication hardening, connectivity hardening, and hygiene (P.E.A.C.H.). Tenant isolation is a design principle that breaks down your cloud environments into granular segments with tight boundaries and stringent access controls. 

A few simple AI security recommendations and best practices

The key to protecting your AI infrastructure is defining and following a set of best practices. Here are 10 of our own to get you started:

1. Choose a tenant isolation framework

The PEACH tenant isolation framework was designed for cloud applications, but the same principles apply to AI security. Tenant isolation is a powerful way to combat the complexities of GenAI integration.

2. Customize your GenAI architecture

Your GenAI architecture needs to be carefully customized to ensure that all components have optimized security boundaries. Some components may need shared security boundaries, others may need dedicated boundaries, and for others still, the right choice depends on context.

3. Evaluate GenAI contours and complexities

Mapping the implications of integrating GenAI into your organization’s products, services, and processes is a must. Important considerations include ensuring that your AI models’ responses to end users are private, accurate, and built on legitimate datasets.

4. Don’t neglect traditional cloud-agnostic vulnerabilities

Remember that GenAI is no different from other multi-tenant applications. It can still suffer from traditional challenges like API vulnerabilities and data leaks. Ensure that your organization doesn’t neglect overarching cloud vulnerabilities in its quest to mitigate AI-specific challenges. 

5. Ensure effective and efficient sandboxing

Sandboxing involves running applications that incorporate GenAI in isolated test environments and scrutinizing their behavior there, and it’s a powerful practice for mitigating AI vulnerabilities. Make sure that your sandboxing environments are optimally configured, though: suboptimal sandbox environments and processes built in a rush can exacerbate AI security vulnerabilities.

6. Conduct isolation reviews

A tenant isolation review provides a comprehensive topology of customer-facing interfaces and internal security boundaries. This can help identify AI security vulnerabilities and further optimize tenant isolation to prevent cybersecurity incidents.

7. Prioritize input sanitization

Establish limitations on user input in GenAI systems to mitigate AI security vulnerabilities. These limitations don’t have to be ultra-complicated; for example, you can replace free-text boxes with dropdown menus that offer a limited set of input options. The biggest challenge with input sanitization is finding a balance between robust security and a smooth end-user experience.
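
As an illustration of how simple such limitations can be, here is a minimal Python sketch that constrains user input before it ever reaches a prompt; the topic allowlist, field names, and length limit are hypothetical and would need tuning for a real application.

```python
import re

# Hypothetical, application-specific allowlist: the code equivalent of replacing
# a free-text box with a dropdown menu of supported request types.
ALLOWED_TOPICS = {"account balance", "card replacement", "branch locations"}

MAX_DETAIL_LENGTH = 200
# Conservative character allowlist for an optional free-text detail field.
SAFE_DETAIL_PATTERN = re.compile(r"^[\w\s.,?!'-]*$")

def sanitize_request(topic: str, detail: str = "") -> tuple[str, str]:
    """Validate user input before it is ever interpolated into an LLM prompt."""
    topic = topic.strip().lower()
    if topic not in ALLOWED_TOPICS:
        raise ValueError(f"Unsupported topic: {topic!r}")

    detail = detail.strip()
    if len(detail) > MAX_DETAIL_LENGTH:
        raise ValueError("Detail text is too long")
    if not SAFE_DETAIL_PATTERN.fullmatch(detail):
        raise ValueError("Detail text contains disallowed characters")

    return topic, detail

# Example usage:
# topic, detail = sanitize_request("Card replacement", "My card was stolen yesterday.")
```

Rejecting invalid input outright, rather than trying to "clean" it, keeps the logic predictable and easy to audit.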

8. Optimize prompt handling

Prompt handling is vital in applications that incorporate GenAI. Businesses need to monitor and log end-user prompts and red-flag any that seem suspicious. For example, if a prompt shows signs of attempting malicious code execution, it should be flagged and investigated.
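
Here is a minimal Python sketch of what prompt logging and red-flagging could look like; the log file name, the keyword heuristics, and the `route_to_review_queue` handler are hypothetical stand-ins, and real deployments would typically layer model-based classifiers and human review on top of simple pattern matching.

```python
import logging
import re
from datetime import datetime, timezone

logging.basicConfig(filename="prompt_audit.log", level=logging.INFO)

# Naive keyword heuristics for illustration only; production systems would pair
# these with model-based classifiers and human review of flagged prompts.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all|previous|prior) instructions", re.IGNORECASE),
    re.compile(r"system prompt", re.IGNORECASE),
    re.compile(r"api[_ ]?key|password|secret", re.IGNORECASE),
    re.compile(r"os\.system|subprocess|<script", re.IGNORECASE),
]

def log_and_flag(user_id: str, prompt: str) -> bool:
    """Log every prompt with a timestamp and return True if it should be reviewed."""
    flagged = any(p.search(prompt) for p in SUSPICIOUS_PATTERNS)
    logging.info(
        "%s user=%s flagged=%s prompt=%r",
        datetime.now(timezone.utc).isoformat(),
        user_id,
        flagged,
        prompt,
    )
    return flagged

# Example usage:
# if log_and_flag("user-42", user_prompt):
#     route_to_review_queue(user_prompt)  # hypothetical downstream handler
```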

9. Understand the security implications of customer feedback

This may seem like a relatively low-risk AI security challenge, but your AI security posture and practices shouldn’t have any cracks. The fact is that a feedback textbox can allow threat actors to introduce malicious content into an application that incorporates GenAI. A simple best practice is to replace free-text feedback options with dropdown fields.

10. Work with reputable AI security experts

AI is going to be central to the next chapter of tech advancements. That’s why AI security is critical and can’t be treated as an afterthought. Working with reputable and highly qualified cloud security experts is the best way to strengthen your AI and cybersecurity posture.  

Securing AI with Wiz

Wiz is the first CNAPP to offer native AI security capabilities fully integrated into the platform. Wiz for AI Security introduces the following new capabilities:

  • AI Security Posture Management (AI-SPM): Gives security teams and AI developers visibility into their AI pipelines by identifying every resource and technology in the AI pipeline, without any agents

  • Extending DSPM to AI: Automatically detects sensitive training data and helps you ensure it is secure, with new out-of-the-box DSPM AI controls

  • Extending Attack Path Analysis to AI: Provides full cloud and workload context around AI pipelines, helping organizations proactively remove attack paths in their environment

  • AI Security Dashboard: Provides an overview of the top AI security issues with a prioritized queue of risks so developers can quickly focus on the most critical ones

Wiz also offers AI security support for Amazon SageMaker and Vertex AI users, helping them monitor and mitigate the security risks associated with managing AI/ML models. Wiz’s customized features for Vertex AI and Amazon SageMaker integrations include robust sandboxing environments, complete visibility across cloud applications, the safeguarding of AI pipelines, and agile deployment of ML models into production. Get a demo to explore how you can leverage the full capabilities of AI without worrying about security.

Wiz is also proud to be a founding member of the Coalition for Secure AI. By joining forces with other pioneers in the field, Wiz is committed to advancing the coalition's mission of secure and ethical AI development. As a founding member, Wiz plays a crucial role in shaping the coalition's strategic direction, contributing to policy development, and promoting innovative solutions that enhance the security and integrity of AI technologies.

