MITRE ATT&CK Framework: Tactics, Techniques, and More
Learn the use cases, tactics, and foundations of the MITRE ATT&CK framework and how to leverage it for improved cloud security.
Welcome to CloudSec Academy, your guide to navigating the alphabet soup of cloud security acronyms and industry jargon. Cut through the noise with clear, concise, expert-crafted content covering the fundamentals of best practices.
To defend against malware in the cloud, businesses need a detection and response solution that’s built for the cloud, fluent in cloud-based indicators of compromise (IOCs), and enriched by cloud threat intelligence.
Credential stuffing attacks can cost a breached organization millions in fines per year. Learn more about foundations, solutions, and real-life cases.
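For illustration, the short Python sketch below shows one common mitigation: a sliding-window rate limiter that throttles repeated failed logins per source IP. The thresholds and function names are hypothetical and not taken from any specific product.

```python
import time
from collections import defaultdict, deque

# Hypothetical thresholds: at most 5 failed logins per source IP in a 60-second window.
MAX_FAILURES = 5
WINDOW_SECONDS = 60

failed_attempts = defaultdict(deque)  # source IP -> timestamps of recent failures

def record_failure(source_ip):
    """Record a failed login so later attempts from this IP can be throttled."""
    failed_attempts[source_ip].append(time.time())

def is_blocked(source_ip):
    """Return True once an IP has exhausted its failed-login budget."""
    attempts = failed_attempts[source_ip]
    now = time.time()
    while attempts and now - attempts[0] > WINDOW_SECONDS:
        attempts.popleft()  # drop failures that fell out of the sliding window
    return len(attempts) >= MAX_FAILURES
```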
There are many sneaky AI security risks that could impact your organization. Learn practical steps to protect your systems and data while still leveraging AI's benefits.
Data exfiltration is when sensitive data is accessed without authorization or stolen. Just like any data breach, it can lead to financial loss, reputational damage, and business disruptions.
Privilege escalation is when an attacker exploits weaknesses in your environment or infrastructure to gain higher access and control within a system or network.
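As a rough illustration, the sketch below scans IAM-style policy statements for overly broad permissions (such as `iam:*` on all resources) that are classic privilege escalation paths in the cloud. The policy document and risky-action list are hypothetical examples, not an exhaustive rule set.

```python
# Flag IAM-style policy statements whose broad "Allow" permissions are
# commonly abused for privilege escalation. Example data is hypothetical.
RISKY_ACTIONS = {"*", "iam:*", "iam:PassRole", "iam:CreatePolicyVersion"}

def find_escalation_risks(policy):
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        for action in actions:
            if action in RISKY_ACTIONS:
                findings.append(f"Allow {action} on {stmt.get('Resource')}")
    return findings

example_policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "iam:*", "Resource": "*"}],
}
print(find_escalation_risks(example_policy))  # -> ['Allow iam:* on *']
```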
Lateral movement is a cyberattack technique used by threat actors to navigate a network or environment in search of more valuable information after gaining initial access.
A brute force attack is a cybersecurity threat where a hacker attempts to access a system by systematically testing different passwords until a correct set of credentials is identified.
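To make the mechanism concrete, here is a minimal Python sketch that systematically tests a small wordlist against a stored password hash. The hash, wordlist, and function names are hypothetical and purely illustrative.

```python
import hashlib

# Purely illustrative: systematically test candidate passwords against a
# stored SHA-256 hash until one matches. The values below are hypothetical.
stored_hash = hashlib.sha256(b"sunshine").hexdigest()
wordlist = ["password", "123456", "qwerty", "sunshine", "letmein"]

def brute_force(target_hash, candidates):
    for candidate in candidates:
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
            return candidate  # a correct credential has been identified
    return None

print(brute_force(stored_hash, wordlist))  # -> "sunshine"
```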
An attack surface refers to all the potential entry points an attacker could exploit to gain unauthorized access to a system, network, or data.
Cryptojacking is when an attacker hijacks your processing power to mine cryptocurrency for their own benefit.
Remote code execution refers to a security vulnerability through which malicious actors can remotely run code on your systems or servers.
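The minimal sketch below illustrates the pattern: evaluating untrusted input with `eval` lets an attacker execute arbitrary code on the server, while a restricted parser such as `ast.literal_eval` rejects it. The function names and inputs are hypothetical.

```python
import ast

# Classic remote-code-execution pattern: evaluating untrusted input directly.
def unsafe_calculate(user_expression):
    # DANGEROUS: input like "__import__('os').system('...')" would execute.
    return eval(user_expression)

# Safer sketch: ast.literal_eval only accepts plain literals (numbers,
# strings, lists, ...), so attacker-supplied code raises an error instead
# of running.
def safer_calculate(user_expression):
    return ast.literal_eval(user_expression)

print(safer_calculate("[1, 2, 3]"))      # works: a plain literal
# safer_calculate("__import__('os')")   # raises ValueError instead of executing
```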
Malicious code is any software or programming script that exploits software or network vulnerabilities and compromises data integrity.
Shadow AI is the unauthorized use or implementation of AI that is not controlled by, or visible to, an organization’s IT department.
A security misconfiguration is when incorrect security settings are applied to devices, applications, or data in your infrastructure.
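As a simple illustration, the sketch below audits a hypothetical storage-bucket configuration against secure defaults and reports any settings that drift from them. The setting names are illustrative and not tied to any particular cloud provider.

```python
# Hedged sketch: compare a (hypothetical) bucket configuration against
# secure defaults and report every misconfigured setting.
SECURE_DEFAULTS = {
    "block_public_access": True,
    "encryption_at_rest": True,
    "versioning_enabled": True,
}

def find_misconfigurations(bucket_config):
    return [
        setting
        for setting, expected in SECURE_DEFAULTS.items()
        if bucket_config.get(setting) != expected
    ]

example_bucket = {"block_public_access": False, "encryption_at_rest": True}
print(find_misconfigurations(example_bucket))
# -> ['block_public_access', 'versioning_enabled']
```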
Shadow IT is an employee's unauthorized use of IT services, applications, and resources that aren't controlled by, or visible to, an organization's IT department.
Uncover the top cloud security issues affecting organizations today. Learn how to address cloud security risks, threats, and challenges to protect your cloud environment.
Cross-site request forgery (CSRF), also known as XSRF or session riding, is an attack approach where threat actors trick trusted users of an application into performing unintended actions.
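One standard mitigation is the synchronizer-token pattern, sketched below with the Python standard library: the server issues a random per-session token and rejects state-changing requests that don't echo it back. The `session` dictionary stands in for real framework session storage and is hypothetical.

```python
import hmac
import secrets

def issue_csrf_token(session):
    """Generate a per-session token to embed in forms as a hidden field."""
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def is_request_valid(session, submitted_token):
    """Accept a state-changing request only if it echoes the session token."""
    expected = session.get("csrf_token", "")
    # Constant-time comparison avoids leaking the token via timing.
    return bool(expected) and hmac.compare_digest(expected, submitted_token)

session = {}
token = issue_csrf_token(session)
print(is_request_valid(session, token))     # True: legitimate form post
print(is_request_valid(session, "forged"))  # False: cross-site forgery
```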
Data sprawl refers to the dramatic proliferation of enterprise data across IT environments, which can lead to management challenges and security risks.
In this blog post, we’ll explore security measures and continuous monitoring strategies to prevent data leaks, mitigating the risks posed by security vulnerabilities, human error, and attacks.
LLMs, like GPT and other foundation models, come with significant risks if not properly secured. From prompt injection attacks to training data poisoning, the potential vulnerabilities are manifold and far-reaching.
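To illustrate prompt injection specifically, the sketch below contrasts naive prompt concatenation with a minimal, deliberately incomplete guard that delimits user input and screens for obvious override phrases. The prompts, phrase list, and function names are hypothetical examples, not a complete defense.

```python
# Naively concatenating untrusted text into an LLM prompt lets that text
# override the system instructions (prompt injection).
SYSTEM_PROMPT = "You are a support bot. Only answer questions about billing."

def naive_prompt(user_input):
    # Vulnerable: user text is indistinguishable from instructions.
    return SYSTEM_PROMPT + "\n" + user_input

# Minimal, non-exhaustive mitigation: delimit user input and screen for
# obvious instruction-override phrases before it reaches the model.
SUSPICIOUS_PHRASES = ("ignore previous instructions", "disregard the system prompt")

def guarded_prompt(user_input):
    lowered = user_input.lower()
    if any(phrase in lowered for phrase in SUSPICIOUS_PHRASES):
        raise ValueError("possible prompt injection attempt")
    return f"{SYSTEM_PROMPT}\n<user_input>\n{user_input}\n</user_input>"

print(guarded_prompt("How do I update my credit card?"))
```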
Data leakage is the uncontrolled exfiltration of organizational data to a third party. It occurs through various means, such as misconfigured databases, poorly protected network servers, phishing attacks, or even careless data handling.
ChatGPT security is the process of protecting an organization from the compliance, brand image, customer experience, and general safety risks that ChatGPT introduces into applications.
Adversarial artificial intelligence (AI), or adversarial machine learning (ML), is a type of cyberattack where threat actors corrupt AI systems to manipulate their outputs and functionality.
LLM jacking is an attack technique that cybercriminals use to manipulate and exploit an enterprise’s cloud-based LLMs (large language models).
Credential access is a cyberattack technique in which threat actors steal or hijack legitimate user credentials to gain entry into an enterprise's IT environments.