
Uncovering Hybrid Cloud Attacks Part 3 – The Response



In the final section of this blog series on uncovering complex hybrid cloud attacks, we'll share key elements of the response to the real-world sophisticated cloud attack outlined in Part 2. To protect the victim organization's identity, certain details of the attack have been modified and combined with other attacks seen in the wild; however, every stage of the presented case study was performed by real attackers and responders.

By analyzing the response efforts as they developed from the defender perspective, we can highlight how intelligence-driven incident response can be leveraged to defeat even the most sophisticated attacks.

We now turn to the incident response and investigation efforts performed by the victim organization. Instead of detailing every element of these lengthy efforts, our focus will be on the key challenges posed by the hybrid and long-term nature of the attack, and the ways they were eventually resolved by leveraging effective intelligence-driven incident response.

Initial Detection and Cloud Remediation

The victim organization's security team first began investigating the attack when a cloud engineer working on an EC2 instance in the production AWS environment noticed that local OS logs were missing from the instance. While attempting to determine what had happened to the missing logs, the engineer eventually escalated the investigation to security team members, who expanded the scope and identified similar anomalies on multiple EC2 instances.

Investigating further, they captured an AMI of one such instance and followed their forensic playbook to analyze it. This analysis identified reverse shells left behind by attackers and immediately led investigators to understand they were dealing with a significant incident.
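For illustration, a forensic acquisition step of this kind might look roughly like the boto3 sketch below; the region, instance ID, and naming convention are placeholders for illustration, not details from the actual investigation.

```python
# Minimal sketch of an EC2 forensic-acquisition step with boto3 (assumed
# region, instance ID, and naming; not the organization's actual values).
import boto3
from datetime import datetime, timezone

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

suspect_instance = "i-0123456789abcdef0"  # hypothetical instance ID
timestamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")

# Create an AMI of the suspect instance without rebooting it, so its
# on-disk state is preserved as closely as possible for analysis.
image = ec2.create_image(
    InstanceId=suspect_instance,
    Name=f"ir-acquisition-{suspect_instance}-{timestamp}",
    Description="Forensic image captured during incident response",
    NoReboot=True,
)
print("Captured AMI:", image["ImageId"])
```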

In addition to the immediate concern generated by the newly detected compromise of sensitive production EC2 instances, the team was highly concerned with uncovering the root cause of the incident. By cross-referencing the creation timestamps of the identified attacker reverse shells with AWS CloudTrail logs, of which only a portion was retained in the organizational SIEM given that the relevant timeframe stretched back over a year, investigators were able to identify a suspicious pattern.

Shortly before the malicious reverse shells were placed on several EC2 instances, the same AWS IAM user performed a “GetPasswordData” operation targeting those same instances. These operations provide whoever triggers them with the local administrator passwords of the targeted EC2 instances, giving them full access to install the reverse shells and continuously delete local OS logs.
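As a rough sketch of how this pattern can be hunted, the following boto3 snippet pulls GetPasswordData events from CloudTrail so that their timestamps and calling identities can be compared against the reverse shell creation times; the region and time window are illustrative assumptions.

```python
# Hedged sketch: enumerate GetPasswordData events from CloudTrail so their
# timestamps can be compared against reverse shell creation times.
import boto3
from datetime import datetime, timezone

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")  # assumed region
paginator = cloudtrail.get_paginator("lookup_events")

pages = paginator.paginate(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "GetPasswordData"}],
    StartTime=datetime(2023, 1, 1, tzinfo=timezone.utc),  # illustrative window
    EndTime=datetime.now(timezone.utc),
)

for page in pages:
    for event in page["Events"]:
        # Each hit shows which IAM identity requested the Windows admin
        # password of which instance, and when.
        print(event["EventTime"], event.get("Username", "?"),
              [r["ResourceName"] for r in event.get("Resources", [])])
```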

Having discovered this connection, investigators turned to examining the remainder of the suspected IAM user’s activities. While full visibility was not possible due to log retention limitations, access to S3 buckets and highly sensitive RDS databases was identified. The full scope of this access and exactly what data was exfiltrated remains unknown to this day, due to audit policy modifications enacted by attackers.

Understanding that every step of the unfolding attack appeared to have been perpetrated through access to this single privileged IAM user, the organization proceeded to rotate its credentials. As an additional precautionary measure, all privileged AWS credentials were rotated, and all EC2 instances accessed by attackers were quarantined and recovered from verified secure backups.
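A simplified sketch of this kind of access key rotation is shown below; the IAM user name is hypothetical, and in practice the rotation was likely performed through the organization's own tooling rather than an ad-hoc script.

```python
# Hedged sketch of rotating a compromised IAM user's access keys with boto3.
import boto3

iam = boto3.client("iam")
user = "prod-automation-user"  # hypothetical name for the compromised IAM user

# Deactivate (rather than delete) the existing keys first, so any legitimate
# dependency breakage can be diagnosed before the keys are removed for good.
for key in iam.list_access_keys(UserName=user)["AccessKeyMetadata"]:
    iam.update_access_key(UserName=user,
                          AccessKeyId=key["AccessKeyId"],
                          Status="Inactive")
    print("Deactivated", key["AccessKeyId"])

# Issue a fresh key; the secret portion should go straight into the
# organization's secret store rather than being printed or logged.
new_key = iam.create_access_key(UserName=user)["AccessKey"]
print("New key created:", new_key["AccessKeyId"])
```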

In addition, tampered logging policies were restored, additional logging including VPC Flow Logs was configured, and specific detection rules were added to the SIEM. As weeks went by after these initial eradication and remediation activities with no further malicious activity detected, it appeared that the attack had been dealt with effectively.
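As one illustrative example of the added logging, VPC Flow Logs can be enabled along the following lines; the VPC ID and S3 destination are placeholders, not the organization's actual resources.

```python
# Hedged sketch: enable VPC Flow Logs for a VPC, delivering to S3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

response = ec2.create_flow_logs(
    ResourceIds=["vpc-0abc1234def567890"],                  # hypothetical VPC
    ResourceType="VPC",
    TrafficType="ALL",                                      # accepted and rejected traffic
    LogDestinationType="s3",
    LogDestination="arn:aws:s3:::example-flowlog-bucket",   # placeholder bucket ARN
    MaxAggregationInterval=60,                              # 1-minute aggregation for finer-grained hunting
)
print("Flow log IDs:", response["FlowLogIds"])
```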

The Return of Attackers

Several weeks after the initial response effort was finished, security team members were surprised to see one of their newly created alerts fire: someone was once again trying to modify the audit policies on RDS databases.

A quick triage process revealed it was the same previously compromised privileged IAM user performing the operations, without the knowledge or approval of any members of the cloud engineering or dev-ops teams.

A deeper analysis of the logs revealed that this user had been exfiltrating data from the sensitive RDS database for three days before attempting to re-modify the audit policies, successfully stealing new sensitive information before being detected. 

As the team quickly remediated the modified audit policies and re-rotated the compromised IAM user's credentials, they now had far more information with which to identify the root cause. The latest malicious access was performed with the new AWS access key issued to the compromised user after the previous investigation, indicating that it, too, had been compromised.

Attempts to investigate access to the AWS Secrets Manager secret that stored the new credential revealed no unusual access: the key had only been accessed from known IP addresses by dev-ops and cloud engineering employees. This finding triggered a new fear in the minds of the senior executives advised of the situation: could this be an insider threat?
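A review of this kind can be sketched roughly as follows: pull GetSecretValue events from CloudTrail and flag any access to the relevant secret from outside a known allowlist. The secret name, time window, and IP ranges below are illustrative assumptions.

```python
# Hedged sketch: review who accessed a Secrets Manager secret and from where.
import boto3
import json
from datetime import datetime, timezone

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")  # assumed region
KNOWN_IPS = {"203.0.113.10", "203.0.113.11"}    # placeholder corporate egress IPs
SECRET_NAME = "prod/iam-user/access-key"        # hypothetical secret name

pages = cloudtrail.get_paginator("lookup_events").paginate(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "GetSecretValue"}],
    StartTime=datetime(2024, 1, 1, tzinfo=timezone.utc),  # illustrative window
)

for page in pages:
    for event in page["Events"]:
        detail = json.loads(event["CloudTrailEvent"])
        secret = (detail.get("requestParameters") or {}).get("secretId", "")
        source_ip = detail.get("sourceIPAddress", "")
        if SECRET_NAME in secret and source_ip not in KNOWN_IPS:
            print("Unexpected access:", event["EventTime"], source_ip,
                  detail.get("userIdentity", {}).get("arn"))
```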

Insider threat investigations are notoriously difficult, and the ensuing rabbit hole could have taken months to resolve while missing the real story of this attack. Luckily, incident responders had another investigative avenue to exhaust: leveraging threat intelligence to drive the investigation.

At this stage, investigators had very little information regarding the attackers behind the compromise. The one key piece of information they did have was a small set of IP addresses leveraged by attackers throughout the attack.

While the IP addresses used to access most cloud services belonged to well-known anonymizing VPN services, the VPC Flow Logs enabled after the previous investigation now revealed the C2 IP addresses with which the attackers' reverse shells were communicating. Cross-referencing these IP addresses with multiple public threat intelligence engines revealed an HTTPS server which likely served as the attackers' main C2.
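The flow-log side of this cross-referencing can be approximated with a simple sweep like the one below, assuming the records have been exported in the default flow-log field layout; the C2 addresses and file path are placeholders.

```python
# Hedged sketch: sweep exported VPC Flow Log records for traffic to suspected C2 IPs.
C2_IPS = {"198.51.100.23", "198.51.100.47"}   # hypothetical attacker infrastructure

# Default flow-log format:
# version account-id interface-id srcaddr dstaddr srcport dstport protocol
# packets bytes start end action log-status
with open("vpc-flow-logs.txt") as fh:          # placeholder export of flow log records
    for line in fh:
        fields = line.split()
        if len(fields) < 14 or fields[0] == "version":
            continue
        srcaddr, dstaddr, action = fields[3], fields[4], fields[12]
        if dstaddr in C2_IPS and action == "ACCEPT":
            print("Instance-side talker:", srcaddr, "->", dstaddr)
```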

Identifying this server enabled further analysis and gathering of threat intelligence, revealing that an SSL certificate with unique characteristics was installed on it. Specifically, the “issued by” and “issued to” fields of the certificate included uncommon names. Once again leveraging public threat intelligence engines, investigators performed an internet-wide scan to identify any other servers using SSL certificates with the same unique characteristics. This step revealed a dramatic finding: only one other server on the internet was found to have a matching SSL certificate.
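The starting point of such a certificate pivot can be reproduced in outline with standard Python tooling: retrieve the certificate presented by the suspected C2 server and extract the subject, issuer, and fingerprint that can then be fed into public scanning and threat intelligence engines. The host below is a placeholder, not the real attacker server.

```python
# Hedged sketch: pull a server's TLS certificate and extract pivot fields.
import ssl
from cryptography import x509
from cryptography.hazmat.primitives import hashes

host, port = "c2.example.invalid", 443          # placeholder C2 host

pem = ssl.get_server_certificate((host, port))
cert = x509.load_pem_x509_certificate(pem.encode())

# The uncommon "issued to" / "issued by" names are what made the pivot possible.
print("Subject (issued to):", cert.subject.rfc4514_string())
print("Issuer  (issued by):", cert.issuer.rfc4514_string())
print("SHA-256 fingerprint:", cert.fingerprint(hashes.SHA256()).hex())
```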

Armed with the understanding that this newly identified server was likely owned by the same attacker, investigators ran a wide search across their SIEM to identify any potential communication with the server's IP address. Instead of revealing further evidence of compromise in the production cloud environment, this search yielded a surprising result: a machine inside the company's on-premises corporate environment was communicating with this IP address through the organizational firewall.

This machine was a jump server compromised by attackers in the early stages of the attack. These findings would not have been possible without taking a true intelligence-driven approach to incident response. The combination of public and internal intelligence with classic log analysis yielded critical investigative progress which, in this case and many others, would not have been otherwise possible. 

Thrilled with their new finding, investigators quickly identified a reverse shell installed by attackers on the jump server. They now understood why rotating the compromised AWS credentials didn’t do them much good. Given the hybrid nature of the attack, attackers simply went back to the jump server and waited for someone to use it and bring the credentials right back to them.

Unable to fully investigate the original breach of the jump server due to log retention limitations, the team nevertheless eradicated the reverse shell, re-rotated the compromised credentials, and believed they defeated the attack.

Uncovering the Full Attack Path

When the same alert for modifying RDS audit policies was triggered again a few weeks later, the security team reached a new level of frustration. Immediate triage revealed the same IAM user was again targeting RDS, unbeknownst to any of the dev-ops or engineering teams. Retracing their steps, investigators found evidence of a new reverse shell on the same previously compromised jump server, this time communicating with new attacker IP addresses. This reverse shell was installed shortly after the previous eradication and remediation activities, meaning that this time the activity fell well within log retention periods.

While performing forensics on the jump server, the team was able to correlate the creation of the new reverse shell with a dev-ops employee's RDP session from a Citrix VDI. Unfortunately, the VDI is ephemeral by design and was long gone by the time of the investigation.

Stumped again on how to proceed, security team members attempted a Hail Mary and asked the dev-ops employee whether they could investigate the personal home computer he had used to access Citrix. The slightly confused employee eventually agreed to cooperate and provided the device for the team's review.

Once they had the employee’s personal computer, unravelling the rest of the story was easy work for the experienced incident responders. Forensic evidence quickly revealed the initial malicious payload still running on the device, communicating with the same IP address revealed through the team’s threat intelligence analysis.

After requesting further permission from the employee, investigators were finally able to identify the original social engineering and phishing messages that instigated the entire attack. Almost two years after it began, the company finally understood the full scope of this sophisticated hybrid home-office-cloud attack. Fig. 1 depicts the high-level process incident responders followed to unravel the attack.

Conclusion

Observing the full investigative effort, it becomes abundantly clear that such a comprehensive result was only obtainable due to the effective combination of classic log analysis, forensics, public threat intelligence and internal intelligence data. This case study serves to highlight the dramatic potential impacts of effectively leveraging intelligence-driven incident response to investigate sophisticated cloud cyberattacks.

The wide potential attack surface for initial compromise, combined with the ease with which attackers move across hybrid on-premises and cloud environments, often necessitates the use of public and private intelligence to achieve a meaningful investigation. Combining centralized solutions for managing internal intelligence with public cyber threat intelligence can make the difference between success and failure in identifying and eradicating the root causes of an attack.

When implemented in this diligent form, intelligence-driven incident response provides security professionals with a fighting chance against a sophisticated and rapidly evolving threat landscape.
