The Threat of Adversarial AI
Adversarial artificial intelligence (AI), or adversarial machine learning (ML), is a type of cyberattack where threat actors corrupt AI systems to manipulate their outputs and functionality.
DAST, or dynamic application security testing, is an approach that probes a running application for vulnerabilities that surface only at runtime, when the application is fully functional.
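To make the runtime angle concrete, here is a minimal sketch of one DAST-style check: send a marker payload to a live endpoint and flag responses that reflect it unescaped, a common indicator of reflected XSS. The URLs, payload, and `fetch` callback are illustrative assumptions, not any specific scanner's API.

```python
# Hypothetical DAST-style reflection probe (sketch, not a real scanner).
PAYLOAD = "<dast-probe-7f3a>"  # unlikely-to-occur marker string

def is_reflected(response_body: str, payload: str = PAYLOAD) -> bool:
    """True if the probe payload appears unescaped in the response body."""
    return payload in response_body

def scan(fetch, urls):
    """fetch(url, param_value) -> response body text.

    Sends the marker payload to each URL and returns those that echo it back.
    """
    findings = []
    for url in urls:
        body = fetch(url, PAYLOAD)
        if is_reflected(body):
            findings.append(url)
    return findings
```

In practice `fetch` would wrap an HTTP client (e.g., `requests.get`) against a staging deployment; the point is that the check only works against a running application, which is what distinguishes DAST from static analysis.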
Agentless and agent-based security each have a role in cloud environments. Learn the key differences and how to choose the right model for your infrastructure.
Kubernetes Security Posture Management (KSPM) is the practice of monitoring, assessing, and ensuring the security and compliance of Kubernetes environments.
ChatGPT security is the practice of protecting an organization from the compliance, brand, customer experience, and general safety risks that ChatGPT integrations introduce into applications.
Master vulnerability scanning with this detailed guide. You’ll learn about scanning types, how scanning works, how to pick the right scanning tool, and more.
Learn how cloud infrastructure entitlement management (CIEM) enforces least privilege, cuts excessive permissions, and strengthens your cloud security posture.
Container runtime security is the combination of measures and technology implemented to protect containerized applications at the runtime stage.
SOAR tools unify your operational workflow, allowing you to ingest alerts from fragmented sources and automate the repetitive aspects of incident response.
Start with investigation and triage (lowest risk, fastest value), then move to response automation, then vulnerability prioritization. Trying to do everything at once is how implementations stall.
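The ingest-normalize-triage workflow described above can be sketched in a few lines. This is a hedged illustration, assuming two hypothetical alert sources (a SIEM and an EDR) with made-up field names and a severity threshold chosen for the example; real SOAR platforms have their own schemas and playbook engines.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    title: str
    severity: int  # normalized scale: 0 (info) .. 10 (critical)

def from_siem(raw: dict) -> Alert:
    # Assumed SIEM payload shape: {"rule_name": ..., "sev": 0-10}
    return Alert(source="siem", title=raw["rule_name"], severity=raw["sev"])

def from_edr(raw: dict) -> Alert:
    # Assumed EDR payload uses "low/medium/high"; map onto the common scale.
    scale = {"low": 3, "medium": 6, "high": 9}
    return Alert(source="edr", title=raw["detection"], severity=scale[raw["level"]])

def triage(alerts: list[Alert], threshold: int = 7):
    """Split normalized alerts into (escalate to analyst, lower-priority queue)."""
    ordered = sorted(alerts, key=lambda a: a.severity, reverse=True)
    hot = [a for a in ordered if a.severity >= threshold]
    cold = [a for a in ordered if a.severity < threshold]
    return hot, cold
```

Starting with this kind of triage automation is low risk because it only reorders work for humans; response actions (isolating hosts, disabling accounts) come later, once the normalization layer is trusted.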
Open source intelligence (OSINT) is the process of collecting, analyzing, and converting publicly available information about an organization's digital footprint into clear technical insights that guide security decisions.
Kubernetes as a service (KaaS) is a model in which hyperscalers like AWS, GCP, and Azure let you provision a managed Kubernetes cluster and begin deploying workloads on it immediately.
ASPM moves beyond alert aggregation to validate real exploitability. See how application security posture management helps you prioritize and remediate faster.
The AI Bill of Rights is a framework for developing and using artificial intelligence (AI) technologies in a way that puts people's basic civil rights first.
Cloud migration security is a facet of cybersecurity that protects organizations from security risks during a transition to cloud environments from legacy infrastructure, like on-premises data centers.
AI-DLC is an AI-centric approach to software development that positions AI as the primary executor across every phase of the lifecycle, from planning through operations, while humans provide strategic direction, approval, and oversight.
An application security engineer (AppSec engineer) secures the software development lifecycle by integrating security practices into design, code, and deployment workflows.
Threat intelligence platforms (TIPs) aggregate attacker data from OSINT, dark web sources, commercial feeds, and adversary infrastructure to highlight the threats most likely to be exploited.
A container runtime is the foundational software that allows containers to operate within a host system.