AI Security

Learn how to secure AI models and the cloud systems that support them. These articles explore emerging risks, evolving attack techniques, and the safeguards teams use to protect models, pipelines, and inference workflows — while also showing how AI can boost core security operations.

AI-BOM: Building an AI Bill of Materials

An AI bill of materials (AI-BOM) is a complete inventory of all the assets in your organization’s AI ecosystem. It documents datasets, models, software, hardware, and dependencies across the entire lifecycle of AI systems—from initial development to deployment and monitoring.

Dark AI Explained

Team di esperti Wiz

Dark AI involves the malicious use of artificial intelligence (AI) technologies to facilitate cyberattacks and data breaches. Dark AI includes both accidental and strategic weaponization of AI tools.

Generative AI Security: Risks & Best Practices

Team di esperti Wiz

Generative AI (GenAI) security is an area of enterprise cybersecurity that zeroes in on the risks and threats posed by GenAI applications. To reduce your GenAI attack surface, you need a mix of technical controls, policies, teams, and AI security tools.

LLM Security for Enterprises: Risks and Best Practices

Team di esperti Wiz

LLM models, like GPT and other foundation models, come with significant risks if not properly secured. From prompt injection attacks to training data poisoning, the potential vulnerabilities are manifold and far-reaching.

What is a Prompt Injection Attack?

Prompt injection attacks are an AI security threat where an attacker manipulates the input prompt in natural language processing (NLP) systems to influence the system’s output.

AI Security Solutions in 2025: Tools to secure AI

Team di esperti Wiz

In this guide, we'll help you navigate the rapidly evolving landscape of AI security best practices and show how AI security posture management (AI-SPM) acts as the foundation for scalable, proactive AI risk management.

AI-Powered SecOps: A Brief Explainer

Team di esperti Wiz

In this article, we’ll discuss the benefits of AI-powered SecOps, explore its game-changing impact across various SOC tiers, and look at emerging trends reshaping the cybersecurity landscape.

AI Threat Detection Explained

AI threat detection uses advanced analytics and AI methodologies such as deep learning (DL) and natural language processing (NLP) to assess system behavior, identify abnormalities and potential attack paths, and prioritize threats in real time.

What is AI Red Teaming?

Team di esperti Wiz

Traditional security testing isn’t enough to deal with AI's expanded and complex attack surface. That’s why AI red teaming—a practice that actively simulates adversarial attacks in real-world conditions—is emerging as a critical component in modern AI security strategies and a key contributor to the AI cybersecurity market growth.

The role of Kubernetes in AI/ML development

In this blog post, you’ll discover how Kubernetes plays a crucial role in AI/ML development. We’ll explore containerization’s benefits, practical use cases, and day-to-day challenges, as well as how Kubernetes security can protect your data and models while mitigating potential risks.

AI/ML in Kubernetes Best Practices: The Essentials

Our goal with this article is to share the best practices for running complex AI tasks on Kubernetes. We'll talk about scaling, scheduling, security, resource management, and other elements that matter to seasoned platform engineers and folks just stepping into machine learning in Kubernetes.

The AI Bill of Rights Explained

Team di esperti Wiz

The AI Bill of Rights is a framework for developing and using artificial intelligence (AI) technologies in a way that puts people's basic civil rights first.

NIST AI Risk Management Framework: A tl;dr

Team di esperti Wiz

The NIST AI Risk Management Framework (AI RMF) is a guide designed to help organizations manage AI risks at every stage of the AI lifecycle—from development to deployment and even decommissioning.

The EU Artificial Intelligence Act: A tl;dr

Team di esperti Wiz

In this post, we’ll bring you up to speed on why the EU put this law in place, what it involves, and what you need to know as an AI developer or vendor, including best practices to simplify compliance.

AI Risk Management: Essential AI SecOps Guide

AI risk management is a set of tools and practices for assessing and securing artificial intelligence environments. Because of the non-deterministic, fast-evolving, and deep-tech nature of AI, effective AI risk management and SecOps requires more than just reactive measures.

The Threat of Adversarial AI

Team di esperti Wiz

Adversarial artificial intelligence (AI), or adversarial machine learning (ML), is a type of cyberattack where threat actors corrupt AI systems to manipulate their outputs and functionality.

What is LLM Jacking?

LLM jacking is an attack technique that cybercriminals use to manipulate and exploit an enterprise’s cloud-based LLMs (large language models).