Attack Detection Fundamentals 2021: AWS - Lab #3

By Alfie Champion on 21 April, 2021

In part three of WithSecure Consulting's Attack Detection Fundamentals workshop series for 2021, we covered an end-to-end kill chain, from initial access and discovery using some 'compromised' credentials, through to the installation of persistence and the exfiltration of data from an S3 bucket.

The slides and recording for this workshop can be found here and here respectively.

In the previous lab, we started making changes to the target account. We leveraged the privileged access of our compromised user to add an additional access key and a login profile, and we logged into the web console to take a better look around the account.

In the final lab of this workshop, we're turning our attention to the customer data S3 bucket we saw in our user's inline policy in lab one. We'll explore the files present in the bucket, before downloading the contents to our local system. We'll then elevate our privileges so we can delete the customer data we find. Finally, we'll use Athena once more to take a look at the bucket-level and object-level CloudTrail events, as well as the standalone S3 server access logs we've configured for comparison.

NOTE: The corresponding CloudTrail log can take fifteen minutes or more to arrive after an API call is made, so expect some delay following your activities!

Required Tools

DISCLAIMER: Setup of the tools and the testing environment is not covered comprehensively within this lab. We will assume basic familiarity with the command line and the reader's ability to build the necessary tools.

Walkthrough

Exfiltration

Let's start by listing the buckets and their contents to see what's worth downloading. Returning to the AWS CLI, we can list all buckets in the account with the below command; this is possible thanks to the "s3:ListAllMyBuckets" permission our compromised user is provisioned with.
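One way to do this is with the low-level "s3api" call below; the higher-level "aws s3 ls" would work equally well:

aws s3api list-buckets --query 'Buckets[].Name'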

Now that we know the names of the respective buckets, we can attempt to list their contents. Let's start with the "Customer Data" bucket.
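Using the data bucket name that appears in the inline policy later in this lab:

aws s3 ls s3://fsecure-aws-workshop-data-bucket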

Here we can see there are three dummy data files for customers a, b and c. Just for completeness, we'll attempt to list the contents of the "Log Storage" bucket too.
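The log bucket's exact name will vary with your deployment, so the below uses a placeholder; substitute whichever name the earlier "list-buckets" call returned:

aws s3 ls s3://fsecure-aws-workshop-log-bucket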

Access denied! If we take another look at the inline policy we retrieved in lab one, we can see this is entirely expected behaviour. We have the ability to list and retrieve objects from our "Customer Data" bucket, and to list all buckets in the account, but there is nothing explicitly permitting access to the "Log Storage" bucket, and as such we're met with the "AccessDenied" message.

Turning our attention back to the "Customer Data" bucket, we can use the CLI "sync" command to save all files to a directory on the local system.
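For example, syncing the bucket down to the local path we'll reference later in this lab:

aws s3 sync s3://fsecure-aws-workshop-data-bucket /mnt/c/Tools/data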

Now we have a local copy of our... err... 'sensitive data' and can turn our attention to a more destructive objective.

Impact

Much as we were unable to list the contents of the "Log Storage" bucket, we've only explicitly been granted the "GetObject" privilege on the objects within the "Customer Data" bucket. As such, we don't have the ability to delete the customer files.
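We can confirm as much with an attempted deletion; the object key below is illustrative, so substitute one of the customer files listed earlier. The call should fail with an "AccessDenied" error:

aws s3api delete-object --bucket fsecure-aws-workshop-data-bucket --key customer_a.csv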

Leveraging our "iam:*" privilege once more, we can update our compromised user's privileges to facilitate this. Firstly, save the below policy document to your local system, we'll call it "policy.json". Note the "GetObject" permission has been replaced with a wildcard "s3:*". Unlike the IAM privilege, this policy block specifies the resource affected as "arn:aws:s3:::fsecure-aws-workshop-data-bucket/*" (rather than simply "*"). This means that within that bucket, we can read, create, modify and delete objects to our heart's content.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:*"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::fsecure-aws-workshop-data-bucket/*"
        },
        {
            "Action": [
                "s3:ListBucket"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::fsecure-aws-workshop-data-bucket"
        },
        {
            "Action": [
                "s3:ListAllMyBuckets",
                "iam:*"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}

With our policy created, we can apply it to the "customer_data_management_user" with the following CLI command:

aws iam put-user-policy --user-name customer_data_management_user --policy-name s3_access --policy-document file:///mnt/c/Tools/data/policy.json

At this point, we can add a "ransom.txt" file to our local folder and re-execute the "sync" command, switching the source and remote paths. This effectively syncs the S3 bucket with our local directory, rather than the other way around.
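Reusing the paths from earlier:

aws s3 sync /mnt/c/Tools/data s3://fsecure-aws-workshop-data-bucket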

From here we can go ahead and delete the customer records and confirm that only the "ransom.txt" file remains.
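One way to do this, deleting everything except our ransom note and then confirming what remains:

aws s3 rm s3://fsecure-aws-workshop-data-bucket --recursive --exclude "ransom.txt"
aws s3 ls s3://fsecure-aws-workshop-data-bucket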

When the owner of our compromised account next goes looking for their customer data, they'll be met with our demands!

Detection

Here we'll rely upon our bucket-level and object-level logging. We can confirm that our Terraform scripts have correctly configured data events (and server access logs) in the S3 bucket's "Properties" tab.

Returning to Athena, let's start by querying for all events from our compromised user relating to the S3 service. The following query will get us started:

SELECT
  eventtime,
  eventname,
  eventsource,
  requestparameters,
  errorcode
FROM "fsecure_workshop_database"."cloudtrail_logs_[AWS_ACCOUNT_ID]"
WHERE userIdentity.username = 'customer_data_management_user'
  AND eventsource = 's3.amazonaws.com';

Here we can see the bucket-level API call to "ListBuckets", followed by the object-level calls to "ListObjects" and "GetObject". Notably, our "sync" command shows up as three separate "GetObject" calls, meaning we can see the actions taken against specific files in the bucket, rather than a single notification that files have been downloaded en masse.

Turning our attention to more destructive efforts, we can view:

  • our initial failed "DeleteObject" events,
  • our "PutUserPolicy" call to provision us with the necessary permissions,
  • our subsequently successful "DeleteObject" events.

While cropped in the below screenshot, we can see that the "PutUserPolicy" call includes the full inline policy we're applying, highlighting the change from the "s3:GetObject" permission to a relaxed "s3:*".

SELECT
  eventtime,
  eventname,
  eventsource,
  errorcode,
  requestparameters
FROM "fsecure_workshop_database"."cloudtrail_logs_[AWS_ACCOUNT_ID]"
WHERE eventname IN ('PutUserPolicy', 'DeleteObject');

As we've provisioned our lab with both S3 data events in CloudTrail and server access logs, we can compare the logs for the same actions. We can fetch filtered events from our second Athena table, "s3_access_logs_[AWS_ACCOUNT_ID]", with the following query:

SELECT
  requestdatetime,
  bucket_name,
  operation,
  key,
  errorcode
FROM "fsecure_workshop_database"."s3_access_logs_[AWS_ACCOUNT_ID]"
WHERE requester = 'arn:aws:iam::[AWS_ACCOUNT_ID]:user/customer_data_management_user';

Comparing the metadata available in CloudTrail and our server access logs, we can see some differences in what's provided. Aside from the ARN of the user initiating the request, some user details are lost with server access logs; as an example, we can no longer see whether MFA is present for the session. Other API calls against the S3 service, e.g. those modifying ACLs, versioning, encryption, etc., are also outside the scope of the server access logs.

Conclusions

Wrapping up the labs for this workshop, we've started with some compromised credentials, performed some initial reconnaissance, and gained an understanding of the privileges we hold within the target AWS account. From there, we sought to maintain access to the account by creating a new AWS access key and adding a login profile, subsequently allowing us to browse resources through the AWS management console.

Finally, in this lab, we've exploited our privileges to download customer data from an S3 bucket, before updating the inline policy attached to our user to allow us to delete the customer records and replace them with our "ransom note".

Throughout the workshop labs, we've used Athena and queried the extensive telemetry provided by CloudTrail (with data events configured) to identify the above activity, considering opportunities to filter based on:

  • Read-only activity
  • User Agents
  • MFA-enabled sessions
  • Known source IP addresses

Finally, we've evaluated the telemetry provided by S3 server access logs, using Athena once more to consider what these logs do and don't provide in comparison to our bucket-level and object-level events in CloudTrail.

Thanks for joining, see you next time!