Wiz Defend ya está aquí: detección y respuesta ante amenazas para la nube

CloudSec Academy

Bienvenido a CloudSec Academy, tu guía para navegar por la sopa de alfabeto de los acrónimos de seguridad en la nube y la jerga de la industria. Cortar el ruido con contenido claro, conciso y elaborado por expertos que cubra los fundamentos de las mejores prácticas.

AI Governance: Principles, Regulations, and Practical Tips

Equipo de expertos de Wiz

In this guide, we’ll break down why AI governance has become so crucial for organizations, highlight the key principles and regulations shaping this space, and provide actionable steps for building your own governance framework.

LLM Security for Enterprises: Risks and Best Practices

Equipo de expertos de Wiz

LLM models, like GPT and other foundation models, come with significant risks if not properly secured. From prompt injection attacks to training data poisoning, the potential vulnerabilities are manifold and far-reaching.

Data Leakage: Risks, Causes, & Prevention

Data leakage is the unchecked exfiltration of organizational data to a third party. It occurs through various means such as misconfigured databases, poorly protected network servers, phishing attacks, or even careless data handling.

AI Risk Management: Essential AI SecOps Guide

AI risk management is a set of tools and practices for assessing and securing artificial intelligence environments. Because of the non-deterministic, fast-evolving, and deep-tech nature of AI, effective AI risk management and SecOps requires more than just reactive measures.

The Threat of Adversarial AI

Equipo de expertos de Wiz

Adversarial artificial intelligence (AI), or adversarial machine learning (ML), is a type of cyberattack where threat actors corrupt AI systems to manipulate their outputs and functionality.

What is LLM Jacking?

LLM jacking is an attack technique that cybercriminals use to manipulate and exploit an enterprise’s cloud-based LLMs (large language models).

What is a Prompt Injection Attack?

Prompt injection attacks are an AI security threat where an attacker manipulates the input prompt in natural language processing (NLP) systems to influence the system’s output.

What is a Data Poisoning Attack?

Equipo de expertos de Wiz

Data poisoning is a kind of cyberattack that targets the training data used to build artificial intelligence (AI) and machine learning (ML) models.

Dark AI Explained

Equipo de expertos de Wiz

Dark AI involves the malicious use of artificial intelligence (AI) technologies to facilitate cyberattacks and data breaches. Dark AI includes both accidental and strategic weaponization of AI tools.

AI Security Tools: The Open-Source Toolkit

We’ll take a deep dive into the MLSecOps tools landscape by reviewing the five foundational areas of MLSecOps, exploring the growing importance of MLSecOps for organizations, and introducing six interesting open-source tools to check out

What is Shadow AI?

Equipo de expertos de Wiz

Shadow AI is the unauthorized use or implementation of AI that is not controlled by, or visible to, an organization’s IT department.

Explicación de la seguridad de la IA: cómo proteger la IA (AI Security)

Equipo de expertos de Wiz

La IA es el motor detrás de los procesos de desarrollo modernos, la automatización de la carga de trabajo y el análisis de big data. La seguridad de la IA es un componente clave de la ciberseguridad empresarial que se centra en defender la infraestructura de IA de los ciberataques.