Wiz AI-SPM model scanning: Securely innovate with AI community models

Detect malicious hosted AI models with Wiz AI-SPM and gain confidence in the models your data scientists use

As organizations adopt AI, visibility remains one of the biggest challenges security teams face as they work to remove blind spots and detect every AI technology in their environment. While it is common for organizations to leverage managed AI services, our data shows that 42% of organizations still choose to self-host AI models. Wiz now supports identification and scanning of hosted AI models, giving organizations visibility into those models as well as detection of malicious ones. This capability builds on the existing observability Wiz provides into managed AI models, such as those available via OpenAI, Azure AI Services, Amazon Bedrock, Google Vertex AI, and others. Customers can now see exactly where and how AI models are used in their cloud environment and remove any blind spots in their AI pipelines.

Organizations that self-host their AI models benefit from a wide selection of open-source models that might not be available in SaaS solutions, but they also face unique challenges when it comes to securing their AI pipelines. These models are often downloaded directly from public, “world-writable” open-source repositories like Hugging Face and PyTorch Hub, meaning that developers, data scientists, and AI practitioners form an implicit trust relationship between their organization and the contributors of the open-source models they use in their cloud environment. This introduces new supply chain risks that require compensating security controls. Just as organizations leverage an SBOM (Software Bill of Materials) to identify and address vulnerabilities across their software supply chain, security teams should extend this idea to AI models to ensure experimentation doesn’t introduce new threats. Today, we are excited to add support for malicious model detection, continuing our commitment to empowering our customers to securely innovate with AI.

Visibility into hosted models 

Wiz’s AI Bill of Materials (AI-BOM) provides customers with visibility into their AI pipelines, including their managed AI models and services, and, starting today, it also detects hosted AI models across their cloud environment. Wiz now detects hosted models in formats such as PyTorch and TensorFlow, whether they are sourced from Hugging Face or elsewhere, and whether they are running on a virtual machine or stored in a storage bucket. Security teams and data scientists can quickly gain visibility into the AI models deployed in their environment, wherever they reside, and observe them through the Wiz Inventory or the Wiz Security Graph, mitigating the risk of blind spots and “Shadow AI” in their cloud security posture.

 

Empower data scientists to securely leverage AI community models 

AI teams and data scientists regularly experiment with open-source models developed by the AI community, which helps organizations rapidly advance AI development through open collaboration, knowledge sharing, and innovation. However, using models from an untrusted source introduces hidden risks that organizations need to protect against.

One example of this risk involves models that use pickle files. Pickle is a Python format commonly used for storing AI model weights. While the use of pickle files is widespread, the format is known for its potential security risks: by design, pickle allows arbitrary code execution during deserialization, which means an attacker can craft a malicious AI model that data scientists might unwittingly download and then use in an organization’s AI pipelines.
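To make the mechanism concrete, here is a minimal, harmless sketch of how pickle’s __reduce__ hook turns deserialization itself into code execution (the class name and echoed string are purely illustrative):

```python
import os
import pickle

# Illustrative only: the class name and echoed command are made up.
class MaliciousModel:
    def __reduce__(self):
        # __reduce__ tells pickle how to rebuild this object, and pickle
        # will invoke whatever callable it returns. A real attacker would
        # launch a reverse shell here; a harmless echo shows the mechanism.
        return (os.system, ("echo payload executed during unpickling",))

blob = pickle.dumps(MaliciousModel())  # the "model file" an attacker uploads
pickle.loads(blob)                     # merely loading it runs the command
```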

A real-life example of this risk manifesting in production was discovered by the Wiz Research Team when they found an architectural issue affecting Hugging Face Hub. The team was able to upload a malicious pickle-formatted model to Hugging Face that granted them a reverse shell, which they leveraged to escape the container running the model and then compromise the Hugging Face Inference API service, through which they could access other Hugging Face customers’ data.

Another example discovered by the Wiz Research Team was a repository of models and training data stored in a misconfigured Azure storage instance operated by Microsoft. The misconfiguration made the repository writable by anonymous users, meaning a threat actor could have stealthily replaced the models with malicious ones. Since the models were formatted as pickle files, an attacker could have achieved remote code execution on the machines of any data scientists sourcing models from this repository. This shows the importance of maintaining an organization-wide inventory of self-hosted AI models, including detecting models in storage buckets, and enriching this data with contextual information about public exposure and anonymous access permissions. Beyond these two examples, research by JFrog has shown that threat actors regularly upload malicious models to Hugging Face Hub, which means this is a real-world risk that customers may encounter in the wild.
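As a rough illustration of the kind of exposure check involved (not how Wiz performs it), an unauthenticated list request against an Azure blob container reveals whether it is open to the world; the account and container names below are hypothetical:

```python
import requests

# An HTTP 200 on an unauthenticated list request means anyone on the
# Internet can enumerate the container's blobs; combined with write
# permissions, an attacker could replace models outright.
url = ("https://exampleaccount.blob.core.windows.net/models"
       "?restype=container&comp=list")
resp = requests.get(url, timeout=10)
print("anonymous listing allowed" if resp.status_code == 200 else "listing denied")
```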

While the pickle format is known to be unsafe, it and other unsafe formats still see widespread use in the AI community. That is why it is important for us to help our customers use such formats in ways that safeguard their cloud environments against the risks they entail.
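For teams loading community models today, here are two hedged sketches of safer habits (the file paths are hypothetical):

```python
import torch
from safetensors.torch import load_file

# Habit 1: restrict what unpickling may construct. weights_only=True
# (PyTorch 1.13 and later) limits deserialization to tensors and primitive
# containers, refusing the arbitrary callables a malicious pickle needs.
state_dict = torch.load("downloaded_model.pt", weights_only=True)

# Habit 2: prefer formats that cannot carry code at all, such as
# safetensors, whenever the model is published in one.
safe_state_dict = load_file("downloaded_model.safetensors")
```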

Introducing malicious model scanning 

To help our customers secure their AI models, we are excited to introduce Wiz’s model scanning capabilities, which detect security risks in unsafe model formats so organizations can quickly identify and remediate them. You can now detect hosted AI models in your environment and rely on our new scanning capabilities to alert you to related security findings, such as models that execute suspicious commands or attempt to connect to the Internet. For example, Wiz detected an EC2 instance hosting PyTorch models that would execute malicious code if loaded.
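To give a sense of the idea behind static scanning of pickle-based models (this is a deliberately simplified sketch, not Wiz’s implementation), a scanner can walk the pickle opcode stream without ever executing it and flag imports of risky modules:

```python
import pickletools

# Modules whose import inside a model file warrants a closer look.
RISKY_MODULES = {"os", "posix", "nt", "subprocess", "socket", "builtins", "runpy"}

def scan_pickle(path):
    """Walk the pickle opcode stream WITHOUT executing it."""
    findings = []
    with open(path, "rb") as f:
        for opcode, arg, pos in pickletools.genops(f):
            # The GLOBAL opcode is how a pickle imports a callable
            # (e.g. "posix system") that a later REDUCE opcode invokes.
            # Newer protocols use STACK_GLOBAL instead, which a production
            # scanner must also resolve; this sketch checks GLOBAL only.
            if opcode.name == "GLOBAL" and str(arg).split(" ")[0] in RISKY_MODULES:
                findings.append((pos, arg))
    return findings

print(scan_pickle("suspect_model.pkl"))  # hypothetical file path
```

A production scanner also has to unpack container formats first; a PyTorch .pt file, for instance, is a ZIP archive whose embedded data.pkl holds the pickle stream.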

Detect real-time threats in your AI models 

Without observability, such malicious models might be loaded and execute their code before the security team has a chance to identify or remove them. That is why it is important for organizations to take a defense-in-depth approach, so that if a threat occurs, they can detect and remove it in real time. Wiz Sensor customers benefit from runtime protection against suspicious behavior originating in AI models, so if a threat does materialize, they can catch it in time and minimize the blast radius. For example, Wiz detected a connection initiated by an AI model to a domain associated with cryptomining activity, which might indicate that the model slipped through the organization’s supply chain and is now facilitating malicious activity. Customers can not only mitigate risks associated with their hosted AI models, but also detect threats at runtime and ensure their cloud environments are protected from this supply chain risk.
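As a toy illustration of this runtime idea (again, a sketch rather than the Wiz Sensor), one could enumerate a model-serving process’s outbound connections and compare remote addresses against a denylist; the addresses below are placeholders from the documentation IP ranges:

```python
import psutil

# Placeholder addresses; a real denylist would come from threat
# intelligence feeds and be resolved from known mining-pool domains.
MINING_DENYLIST = {"203.0.113.7", "198.51.100.42"}

def check_model_process_egress(pid):
    alerts = []
    for conn in psutil.Process(pid).connections(kind="inet"):
        # raddr is empty for listening sockets; flag established egress
        # to any address on the denylist.
        if conn.raddr and conn.raddr.ip in MINING_DENYLIST:
            alerts.append(f"suspicious egress to {conn.raddr.ip}:{conn.raddr.port}")
    return alerts
```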

In summary, Wiz AI model scanning allows customers to: 

  • Ensure visibility into hosted AI models  

  • Monitor for unsafe models across their environment 

  • Secure AI deployments by quickly identifying and removing security threats 

Empower secure AI innovation 

Our goal is to help your organization increase AI innovation, securely. You can learn more about Wiz AI-SPM here and about model scanning by visiting our Wiz docs (login required). If you prefer a live demo, we would love to connect with you. 

 
