
Wiz extends its AI-SPM offering to the OpenAI platform

Wiz becomes the first CNAPP to provide AI security for OpenAI, allowing data scientists and developers to detect and mitigate risk in their OpenAI organization with a new OpenAI SaaS connector.

4 minute read

We recently announced AI Security Posture Management (AI-SPM), which provides AI security capabilities that empower AI developers and data scientists to build with AI while staying protected against AI-related risks. So far, we’ve released in-depth support for cloud AI services, including Amazon SageMaker, Google Cloud Vertex AI, Azure AI Services, and Amazon Bedrock.

Today, we are excited to announce the launch of the OpenAI SaaS connector that extends Wiz AI-SPM to support the OpenAI API Platform, making Wiz the first CNAPP to provide AI security for OpenAI customers. This adds to the AI-SPM coverage Wiz already provides for organizations building with Azure OpenAI Service — so no matter where you choose to build with OpenAI, Wiz has you covered. With this launch, organizations can gain visibility into their OpenAI pipelines and proactively mitigate the most critical risks across cloud and OpenAI on the Wiz Security Graph. This empowers them with the confidence that they’re staying secure as they build and innovate with generative AI.

OpenAI’s ChatGPT made history by achieving an unprecedented adoption rate, reaching 1 million users within only 5 days of its launch. Many organizations quickly saw the benefits of generative AI and searched for more ways to apply it to their industry-specific use cases. Users asked for ways to extend and secure ChatGPT so it could be tailored to their unique business needs. To accommodate this, OpenAI introduced a developer platform with a set of capabilities, such as fine-tuning jobs and Assistants, that enable users to customize OpenAI base models to their specific business requirements. Fine-tuning enables users to customize the results of an existing base model by training it on data specific to a certain task. The Assistants API empowers developers to build their own AI apps that perform specific tasks unique to their business.
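
To make those two customization paths concrete, here is a minimal sketch using the official `openai` Python SDK (v1-style client, with the Assistants beta assumed to be available); the file name, model choices, and Assistant configuration are placeholders, not recommendations:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a task-specific training dataset (placeholder file name)
training_file = client.files.create(
    file=open("support-tickets.jsonl", "rb"),
    purpose="fine-tune",
)

# Fine-tune a base model on the uploaded data
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print("Fine-tuning job:", job.id)

# Build an Assistant that performs a business-specific task
assistant = client.beta.assistants.create(
    name="Support Triage Assistant",
    instructions="Classify incoming support tickets and draft a first reply.",
    model="gpt-4o",
)
print("Assistant:", assistant.id)
```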

Organizations leveraging these capabilities need to treat the OpenAI platform as they would any other AI service in the cloud: they need to understand the nuances of its users, data, services, and pipelines. That’s why having a graph to represent OpenAI relationships is important. These organizations need to gain consistent visibility into their components and then detect any new risks that are introduced, such as sensitive data being used to train models and misconfigurations that might expose models to unintended users. The outputs of the OpenAI platform interact heavily with, and are integrated into, organizations’ cloud infrastructure, which is why we are excited to add AI-SPM support for OpenAI.

Visibility into OpenAI pipelines 

AI security starts with visibility as its foundation. AI-SPM for OpenAI provides data scientists with an AI-BOM (short for AI bill of materials) of their OpenAI environment: the models in use, the Assistants they’ve built, and their fine-tuning jobs. In addition, they get visibility into the users in their OpenAI organizations and the training data being used. All of these are mapped on the Wiz Security Graph, providing immediate visibility with a simple UI that makes it easy for data scientists to understand their AI pipelines. Security teams also gain immediate visibility into any new training jobs their AI developers create, providing a single pane of glass regardless of where the job runs.
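
For teams that want a rough, first-pass inventory of these components outside of Wiz, much of the raw material for an AI-BOM can be listed directly from the OpenAI API. The sketch below (again using the `openai` Python SDK, Assistants beta assumed) is just such an illustration, not Wiz’s implementation:

```python
from openai import OpenAI

client = OpenAI()

# Models visible to the organization (base models and fine-tuned variants)
for model in client.models.list():
    print("model:", model.id)

# Fine-tuning jobs and the files they were trained on
for job in client.fine_tuning.jobs.list():
    print("fine-tuning job:", job.id, "trained on:", job.training_file)

# Assistants built on the platform
for assistant in client.beta.assistants.list():
    print("assistant:", assistant.id, assistant.name)

# Uploaded files (training datasets, Assistant files)
for f in client.files.list():
    print("file:", f.id, f.purpose, f.filename)
```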

Risk assessment of OpenAI pipelines 

Because Wiz has full cloud context, it can detect complex risks that span cloud and OpenAI. With this launch, we are extending our attack path analysis to the OpenAI SaaS environment to empower AI developers and data scientists to mitigate critical risks with context. You can now detect risks such as exposed secrets, sensitive data, misconfigurations, and excessive permissions with Wiz’s risk assessment for OpenAI. Wiz then correlates all of these risks based on your cloud and workload context and maps them on the Wiz Security Graph, allowing you to identify toxic combinations and lateral movement paths that pose a critical risk to your OpenAI environment.
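
Conceptually, this kind of correlation can be thought of as path-finding over a graph of cloud and OpenAI resources. The toy sketch below uses `networkx` purely as an illustration (the node names are made up, and this is not Wiz’s implementation) to show how an exposed container, a leaked key, and a fine-tuned model can chain into an attack path:

```python
import networkx as nx

# Toy resource graph: nodes are cloud/OpenAI entities, edges are relationships
graph = nx.DiGraph()
graph.add_edge("internet", "container:web-app")                 # publicly exposed
graph.add_edge("container:web-app", "secret:openai-api-key")    # key in env vars
graph.add_edge("secret:openai-api-key", "openai:organization")  # key grants access
graph.add_edge("dataset:training.jsonl", "model:ft-triage")     # training data
graph.add_edge("dataset:training.jsonl", "secret:aws-iam-key")  # secret in data
graph.add_edge("secret:aws-iam-key", "iam:admin-role")          # lateral movement

# Toxic combinations are paths from an exposure point to a sensitive asset
for source, target in [("internet", "openai:organization"),
                       ("dataset:training.jsonl", "iam:admin-role")]:
    if nx.has_path(graph, source, target):
        print("attack path:", " -> ".join(nx.shortest_path(graph, source, target)))
```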

For example, fine-tuning a model requires you to provide it with a training dataset that might accidentally include data considered sensitive to your organization. To ensure that your data is protected, Wiz detects sensitive training data or secrets in training datasets or Assistant files, so you can sanitize the data or rotate secrets as necessary. Moreover, Wiz’s attack path analysis detects complex risks, such as a fine-tuned model that was trained on a dataset containing a secret that grants permissions to an AWS IAM user, enabling that user to move laterally and assume an admin role. Training data containing secrets with such high permissions can pose an immediate threat to the organization’s entire cloud infrastructure.
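
As a simplified stand-in for one narrow slice of that check, the sketch below scans a fine-tuning dataset (JSONL format, placeholder path) for AWS access key IDs before the file is ever uploaded; Wiz’s own detection covers a much broader range of secret and sensitive-data types:

```python
import json
import re

# AWS access key IDs follow a well-known prefix/length pattern
AWS_ACCESS_KEY_RE = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def find_secrets(dataset_path: str) -> list[tuple[int, str]]:
    """Return (line number, match) pairs for secret-like strings in a JSONL dataset."""
    findings = []
    with open(dataset_path) as f:
        for lineno, line in enumerate(f, start=1):
            record = json.loads(line)
            for match in AWS_ACCESS_KEY_RE.findall(json.dumps(record)):
                findings.append((lineno, match))
    return findings

if __name__ == "__main__":
    for lineno, key in find_secrets("support-tickets.jsonl"):
        print(f"possible AWS access key on line {lineno}: {key}")
```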

Similarly, Wiz can also identify risks to your OpenAI environment that originate in your cloud environment. For example, a publicly exposed and vulnerable container in your cloud environment might contain an OpenAI API key, thereby putting your OpenAI environment at risk. By alerting you to this risk, Wiz can help you rotate the API key before threat actors get the chance to steal it and compromise your OpenAI subscription. 

Graph-based context for democratization 

With the Wiz Security Graph, security teams that aren’t well-versed in the nuances of OpenAI can immediately understand their OpenAI risks and attack paths. The Security Graph removes the complexity of learning security nuances for each generative AI platform by normalizing AI pipelines no matter what platform they run on. At the same time, data scientists who are not security experts can immediately understand risks in their pipelines through an easy-to-understand visualization on the graph and accurate risk prioritization. This helps scale security to new teams: data scientists can now own security for their AI pipelines, which strengthens trust between security and data science teams. That trust, and the democratization of generative AI security practices across teams, empowers them to work together to bring new innovations to market quickly.

Start securely innovating with OpenAI today 

At Wiz, our goal is to empower our customers to securely accelerate AI innovation regardless of the service used or where it's hosted, whether it's in a cloud service provider or a vendor-specific cloud. Customers can now use Wiz to detect risks in their OpenAI SaaS solution. By combining OpenAI context with cloud context, organizations can effectively remove critical attack paths from the cloud to their AI models and vice-versa, and instead focus on developing and innovating with generative AI. Wiz customers can now navigate to the connectors page and deploy the new OpenAI connector. Learn more by visiting the Wiz Docs (login required). If you prefer a live demo, we would love to connect with you. 
