CloudSec Academy

CloudSec Academy へようこそ。クラウドセキュリティの頭字語と業界用語のアルファベットスープをナビゲートするためのガイドです。 明確で簡潔、かつ専門的に作成されたコンテンツで、基本的なことからベストプラクティスまでをカバーします。

What is AI Red Teaming?

Wiz エキスパートチーム

Traditional security testing isn’t enough to deal with AI's expanded and complex attack surface. That’s why AI red teaming—a practice that actively simulates adversarial attacks in real-world conditions—is emerging as a critical component in modern AI security strategies and a key contributor to the AI cybersecurity market growth.

The Impact of AI in Software Development

Wiz エキスパートチーム

AI-assisted software development integrates machine learning and AI-powered tools into your coding workflow to help you build, test, and deploy software without wasting resources.

Generative AI Security: Risks & Best Practices

Wiz エキスパートチーム

Generative AI (GenAI) security is an area of enterprise cybersecurity that zeroes in on the risks and threats posed by GenAI applications. To reduce your GenAI attack surface, you need a mix of technical controls, policies, teams, and AI security tools.

AI/ML in Kubernetes Best Practices: The Essentials

Our goal with this article is to share the best practices for running complex AI tasks on Kubernetes. We'll talk about scaling, scheduling, security, resource management, and other elements that matter to seasoned platform engineers and folks just stepping into machine learning in Kubernetes.

The AI Bill of Rights Explained

Wiz エキスパートチーム

The AI Bill of Rights is a framework for developing and using artificial intelligence (AI) technologies in a way that puts people's basic civil rights first.

AI Compliance in 2025

Wiz エキスパートチーム

Artificial intelligence (AI) compliance describes the adherence to legal, ethical, and operational standards in AI system design and deployment.

AI-BOM: Building an AI-Bill of Materials

Wiz エキスパートチーム

An AI bill of materials (AI-BOM) is a complete inventory of all the assets in your organization’s AI ecosystem. It documents datasets, models, software, hardware, and dependencies across the entire lifecycle of AI systems—from initial development to deployment and monitoring.

NIST AI Risk Management Framework: A tl;dr

Wiz エキスパートチーム

The NIST AI Risk Management Framework (AI RMF) is a guide designed to help organizations manage AI risks at every stage of the AI lifecycle—from development to deployment and even decommissioning.

AIガバナンス:原則、規制、および実用的なヒント

Wiz エキスパートチーム

このガイドでは、AIガバナンスが組織にとって非常に重要になった理由を解説し、この分野を形作る主要な原則と規制に焦点を当て、独自のガバナンスフレームワークを構築するための実行可能な手順を提供します。

The EU AI Act

Wiz エキスパートチーム

この投稿では、EUがこの法律を施行した理由、その内容、AI開発者またはベンダーとして知っておくべきこと(コンプライアンスを簡素化するためのベストプラクティスなど)についてお伝えします。

LLM Security for Enterprises: Risks and Best Practices

Wiz エキスパートチーム

LLM models, like GPT and other foundation models, come with significant risks if not properly secured. From prompt injection attacks to training data poisoning, the potential vulnerabilities are manifold and far-reaching.

Data Leakage:リスク、原因、防止

データ漏洩とは、組織データが第三者に対して野放しに持ち出されることです。 これは、データベースの設定ミス、ネットワークサーバーの保護が不十分な、フィッシング攻撃、さらには不注意なデータ処理など、さまざまな手段で発生します。

AI Risk Management: Essential AI SecOps Guide

AI risk management is a set of tools and practices for assessing and securing artificial intelligence environments. Because of the non-deterministic, fast-evolving, and deep-tech nature of AI, effective AI risk management and SecOps requires more than just reactive measures.

The Threat of Adversarial AI

Wiz エキスパートチーム

Adversarial artificial intelligence (AI), or adversarial machine learning (ML), is a type of cyberattack where threat actors corrupt AI systems to manipulate their outputs and functionality.

What is LLM Jacking?

LLM jacking is an attack technique that cybercriminals use to manipulate and exploit an enterprise’s cloud-based LLMs (large language models).

What is a Data Poisoning Attack?

Wiz エキスパートチーム

Data poisoning is a kind of cyberattack that targets the training data used to build artificial intelligence (AI) and machine learning (ML) models.

Dark AI Explained

Wiz エキスパートチーム

Dark AI involves the malicious use of artificial intelligence (AI) technologies to facilitate cyberattacks and data breaches. Dark AI includes both accidental and strategic weaponization of AI tools.

7 AI Security Risks You Can't Ignore

Wiz エキスパートチーム

Learn about the most pressing security risks shared by all AI applications and how to mitigate them.