Choosing an AI-SPM tool: The four questions every security organization needs to ask

Stay secure as your organization adopts AI by asking these four guiding questions


AI has introduced a new era of innovation, empowering organizations to create cutting-edge applications. Many of our customers have already adopted AI services in their cloud environments, and our research team found that over 70% of cloud environments are using AI today. With this rapid adoption, security teams are concerned about the security implications of introducing AI into their cloud environments and the expanded attack surface. We keep hearing the same question from security organizations: “How can we build security into our organization’s AI processes?”

AI adoption today mirrors where cloud adoption was several years ago: almost everyone is using it, but very few organizations have a process in place to govern it. Unlike the cloud revolution, however, AI security is being developed side by side with the new technology. To help our customers address this AI security challenge, we came up with four questions we think every security organization should be asking itself: 

  1. Does my organization know what AI services and technologies are running in my environment? 

  2. Do I know the AI risks in my environment? 

  3. Can I prioritize the critical AI risks? 

  4. Can I detect misuse in my AI pipelines? 

The goal of this exercise is to help security teams stay confident that they can leverage AI while keeping their environment secure, and in doing so accelerate AI adoption. 

Let’s dive into what it means to be able to answer yes to each of these questions: 

  1. Does my organization know what AI is running in my environment? 

    Cloud providers are constantly releasing new AI services, and developers and data scientists are quick to experiment with them. As developers start innovating with AI, can your security team easily answer which AI models, SDKs, or services are running in the environment? Can you detect which data stores hold training data, or which tools and workloads train models and host inference? If the answer to any of these is no, you are not alone: many organizations face shadow AI early in their AI adoption. But if your team doesn’t know what AI models and pipelines are in the environment, how can it effectively secure them? At Wiz, we look at visibility as the core enabler of secure AI adoption. Your security team should not have to rely on developers and data scientists to flag the AI services they enabled; it should be able to detect the complete AI-BOM in near real time as managed and self-hosted AI services, SDKs, libraries, fine-tuning jobs, and more are introduced into your environment, as the sketch below illustrates.
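To make this concrete, here is a minimal discovery sketch in Python. It assumes an AWS environment with boto3 credentials already configured, and the SDK watchlist is a hypothetical starting point; it illustrates the concept of AI-BOM discovery, not how Wiz’s agentless scanning actually works.

```python
"""Minimal AI discovery sketch: enumerate managed AI services and AI SDKs.

Illustrative only. Assumes AWS credentials are configured for boto3;
AI_PACKAGES is a hypothetical, non-exhaustive watchlist.
"""
from pathlib import Path

import boto3

AI_PACKAGES = {"openai", "anthropic", "langchain", "transformers", "torch", "tensorflow"}

def discover_managed_ai_services(region="us-east-1"):
    """List a few managed AI resources via the cloud provider's APIs."""
    sagemaker = boto3.client("sagemaker", region_name=region)
    return {
        "endpoints": [e["EndpointName"] for e in sagemaker.list_endpoints()["Endpoints"]],
        "training_jobs": [j["TrainingJobName"]
                          for j in sagemaker.list_training_jobs()["TrainingJobSummaries"]],
    }

def discover_ai_sdks(repo_root):
    """Flag known AI SDKs declared in requirements files (self-hosted signal)."""
    found = set()
    for req in Path(repo_root).rglob("requirements*.txt"):
        for line in req.read_text().splitlines():
            name = line.split("==")[0].split(">=")[0].strip().lower()
            if name in AI_PACKAGES:
                found.add(name)
    return found

if __name__ == "__main__":
    print(discover_managed_ai_services())
    print(discover_ai_sdks("."))
```

A real AI-BOM runs this kind of enumeration continuously, across every account, region, and service, which is what turns a point-in-time inventory into near-real-time visibility.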

  2. Do I know the AI risks in my environment? 

    Your security team has successfully gained visibility into AI components; now what? The next question your team should ask is: what new risks does adopting AI introduce into my environment, and how does my attack surface change? AI model training involves storing large datasets, so your team must protect against data risks such as data leakage. A real-life example of a training-data risk is a recent discovery by our research team: Wiz Research found 38TB of data accidentally exposed by Microsoft AI researchers, including sensitive data such as exposed secrets, private keys, and passwords. Data risks are not the only ones introduced into the environment: security teams also need to protect against model risks such as model poisoning, where an attacker gains access to the training data and poisons it with malicious data to change the model’s output. In addition, the same risks that exist in the cloud exist in AI pipelines: vulnerabilities, misconfigurations, network exposures, and excessive permissions. Organizations innovating with AI need visibility across all of these risk types to secure their AI pipelines; the sketch below flags one common data risk as an example.
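As one concrete example of a data-risk check, the sketch below flags storage buckets holding training data that lack a public access block, a common root cause of accidental exposure like the one described above. It assumes AWS and boto3; the bucket list is hypothetical and would in practice come from the AI-BOM discovery in question 1.

```python
"""Sketch: flag training-data buckets that could be publicly exposed.

Illustrative only. TRAINING_BUCKETS is a hypothetical list; in practice
these buckets would be identified by AI-BOM discovery.
"""
import boto3
from botocore.exceptions import ClientError

TRAINING_BUCKETS = ["example-training-data"]  # hypothetical

def blocks_public_access(bucket):
    """Return True only if all four S3 public-access-block settings are on."""
    s3 = boto3.client("s3")
    try:
        cfg = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
        return all(cfg.values())
    except ClientError:
        # No public-access-block configuration exists at all.
        return False

for bucket in TRAINING_BUCKETS:
    if not blocks_public_access(bucket):
        print(f"RISK: training bucket '{bucket}' is not protected from public exposure")
```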

  3. Can I prioritize the critical AI risks? 

    Your team has detected all the risks in the environment; how will it prioritize one AI misconfiguration or vulnerability over another? To effectively prioritize AI risks, organizations need to understand the full context across cloud and workload. For example, you can detect an AI misconfiguration such as a notebook instance with root access enabled, but if you also know that the training data used by that instance contains sensitive data, it becomes a critical risk you need to prioritize. That is why it is important to have a dashboard acting as a single pane of glass across all risks in the AI pipeline, letting you understand how risks combine into an attack path in the environment. The dashboard should provide accurate risk prioritization so your security and AI teams can effectively remove the most critical risks and focus on further AI innovation. The sketch below shows how combining findings changes a risk score.
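A minimal scoring sketch shows why this context matters: two moderate findings on the same asset compound into one critical risk. The findings, weights, and compounding rule are all hypothetical and far simpler than real attack path analysis.

```python
"""Sketch: context-aware risk scoring (illustrative, not Wiz's model)."""
from dataclasses import dataclass, field

# Hypothetical weights for individual findings.
WEIGHTS = {"root_access_enabled": 3, "sensitive_training_data": 3, "public_exposure": 4}

@dataclass
class Asset:
    name: str
    findings: set = field(default_factory=set)

def risk_score(asset):
    """Score an asset; co-occurring findings compound into an attack path."""
    base = sum(WEIGHTS.get(f, 1) for f in asset.findings)
    if len(asset.findings) > 1:
        base *= len(asset.findings)  # naive compounding of combined findings
    return base

notebook = Asset("training-notebook-1", {"root_access_enabled", "sensitive_training_data"})
lone = Asset("training-notebook-2", {"root_access_enabled"})
print(risk_score(notebook), ">", risk_score(lone))  # 12 > 3: prioritize the first
```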

  4. Can I detect misuse in my AI pipelines? 

    You’ve proactively detected and removed critical risks in the environment; are you now completely protected against AI threats? As with protecting your cloud environment, you want to remove as much critical risk as possible, but you may not be able to remove every single risk. AI also grabs threat actors’ attention, so you still need to be prepared for any threats that come up. Security teams should be able to detect suspicious activity in AI pipelines in real time, such as an external user misusing the AI model or planting a malicious model, so they can respond quickly, reduce the blast radius, and remove the threat. The sketch below shows a deliberately naive version of one such detection.
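As a deliberately naive illustration, the sketch below implements one such detection rule: flag any caller whose inference rate spikes inside a sliding window. The window and threshold are hypothetical, and real detection would correlate many more signals across the pipeline.

```python
"""Naive misuse detection on inference events (illustrative threshold rule)."""
from collections import defaultdict, deque

WINDOW_SECONDS = 60          # hypothetical sliding-window length
MAX_CALLS_PER_WINDOW = 100   # hypothetical per-caller rate ceiling

recent = defaultdict(deque)  # caller -> event timestamps inside the window

def record_inference(caller, ts):
    """Record one inference call; return True if it looks like misuse."""
    q = recent[caller]
    q.append(ts)
    # Drop events that have aged out of the window.
    while q and q[0] < ts - WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_CALLS_PER_WINDOW

# Example: a scraper hammering the model endpoint trips the rule.
for i in range(150):
    if record_inference("external-user-42", ts=i * 0.1):
        print("ALERT: suspicious inference rate from external-user-42")
        break
```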

Security teams need to ask these questions as their organization adopts AI, and can use them as guiding principles when searching for an AI-SPM solution. An AI-SPM tool that allows you to answer yes to all four provides the core AI security capabilities for protecting your AI pipelines. At Wiz, we want to make sure our customers can keep up with the pace of AI innovation, which is why we released our AI-SPM capabilities. Wiz AI-SPM gives our customers full visibility into their AI pipelines with agentless AI-BOM capabilities and extends our attack path analysis to AI so they can proactively remove critical AI risks. You can learn more by visiting the Wiz for AI webpage. If you prefer a live demo, we would love to connect with you. 

 
