
The undetectable way of exporting an AWS DynamoDB

In this post we'll go over a limitation we found in the current AWS CloudTrail logging features, one that limits the ability to detect possible abuse of AWS DynamoDB in the event of a user's AWS IAM keys being compromised.

The objective of this article is to make users aware of this limitation and to discuss alternatives for improving the detection of attacks that might abuse it. This information was shared with AWS first, and only after going through the disclosure process with their security team did we decide to write it up.

Introduction & Methodology 

A big part of the work done for our MDR platform comes from adversary simulation. Even though a lot of effort is put into developing our detection capabilities against a known killchain, we're constantly looking for alternative ways of attacking AWS workloads, regardless of whether the techniques have been seen in the wild or not.

Our methodology for doing these simulations is quite straightforward. We start by selecting one of the many services AWS has to offer, and we perform every action an adversary could take to benefit themselves.

We then look at what logging these actions generate (most of the time in AWS CloudTrail) and start developing our capabilities (workflows, visualisations, alerting, remediation) based on that.
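
As a minimal sketch of that last step (in practice we work on the logs CloudTrail ships into our own pipeline, but the idea is the same), recent management events for a given event source can be pulled straight from awscli:

$ aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventSource,AttributeValue=dynamodb.amazonaws.com

Filtering on EventName instead of EventSource works just as well when zooming in on a single action.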

Recently, in a detection engineering hackathon, we focused on AWS DynamoDB.

What was found

The simulation was simple, even if scary to think of: an adversary had gained AWS IAM user credentials with, at a bare minimum, permissions to read AWS DynamoDB tables. With these credentials the adversary's goal was to pull all the information from a table (without going through AWS Glue).

Let's assume that the attacker does not know which tables exist in the account for which credentials were stolen. Through awscli, the dynamodb sub-command allows us to list-tables:

$ aws dynamodb list-tables

{
    "TableNames": [
        "3cs-detect-table"
    ]
}

This event will generate a log that we can quickly identify as ListTables.
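
For reference, a heavily abridged version of what such a record looks like is shown below; the values are illustrative, not a verbatim log from our environment:

{
    "eventSource": "dynamodb.amazonaws.com",
    "eventName": "ListTables",
    "awsRegion": "eu-west-1",
    "sourceIPAddress": "198.51.100.10",
    "userAgent": "aws-cli/1.x.x ...",
    "userIdentity": {
        "type": "IAMUser",
        "userName": "compromised-user"
    }
}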


In the detection we built for this event we were looking at the provider of AWS DynamoDB logs (event.provider), the action that results in listing tables (event.action), done through the awscli tool (user_agent.name) and by a particular user (user.name).

The detection can vary depending on specific needs, from excluding IPs that are allowed to make requests via awscli to flagging particular awscli versions. Detections are usually tweaked according to client environments, and as long as a log exists, we can develop custom workflows for alerting and remediation.
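
As a minimal sketch, assuming ECS-style field names and a KQL-like query syntax (both depend on how your SIEM normalises CloudTrail, so treat this as an illustration rather than our exact rule), the base detection could look like this:

event.provider : "dynamodb.amazonaws.com"
    and event.action : "ListTables"
    and user_agent.name : "aws-cli"

Per-environment exclusions (allowed source IPs, expected awscli versions, specific user.name values) would then be layered on top.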

Continuing with our attack, now that we have the detail we need (the table name), we'd like to export the records of the 3cs-detect-table. Still within the dynamodb sub-command of awscli we can run:

$ aws dynamodb scan --table-name 3cs-detect-table --query "Items[*].[id.N,name.S]" --output text

    3       frank
    2       john
    4       cees
    1       tiago


Now, we did make an assumption when running the above command: the schema of the table.

As an attacker, you'd have to guess what to query in the table from which you'd like to export content, but as you'll shortly see, brute-forcing it would not be a problem.
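
As a rough sketch of how cheap that guessing is (the candidate attribute names and the output below are made up for illustration), an attacker could loop over common names and let JMESPath count how many items contain each one; the --query projection is applied client-side, so every attempt is just another Scan:

$ for attr in id Id pk uuid; do
      echo -n "${attr}: "
      aws dynamodb scan --table-name 3cs-detect-table \
          --query "length(Items[*].${attr})" --output text
  done

    id: 4
    Id: 0
    pk: 0
    uuid: 0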

It's also relatively safe to assume that a table will hold some sort of "id" attribute with "N" (numeric) content.

Based on that assumption we set out to build a detection that would find the scanning/reading of a table when run through awscli. To our surprise, there was no record in AWS CloudTrail of us scanning/reading the table through awscli.

While the scanning operation would be normal when used in an application (and those requests would come from the SDK, not the CLI), our goal was to detect when this functionality of AWS DynamoDB was invoked through awscli.
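
For completeness, the full export we had in mind is just as quiet: redirecting the scan output to a local file is enough (as far as we can tell awscli paginates through larger tables by default), with the file name below being just an example:

$ aws dynamodb scan --table-name 3cs-detect-table --output json > 3cs-detect-table.json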

Unsure of our results, we reached out to AWS.

AWS Response

AWS has a track record of promptly addressing remarks or concerns about the security of their cloud, and we got a chance to see that for ourselves.

Below is a timeline of the interactions:
  1. Report/Questions are sent to the AWS Security Team (27/03)
  2. Acknowledgement of the report and follow up date scheduled (27/03)
  3. Extension of the follow up date due to current health situation (03/04)
  4. Closure of the case and further details about the finding (10/04)
According to AWS Security this issue had been reported before, and the AWS DynamoDB team is "actively working on adding logging for these types of activities" as they are currently "gathering data on the feature".

While this logging is not yet available in all regions, AWS did inform us that users who want the functionality enabled can request it through a Support Request for workloads in the US East (Ohio) and US West (Oregon) regions.

Even though AWS didn't confirm it, we're fairly confident this feature will be rolled out to other regions soon and eventually become part of the standard AWS CloudTrail functionality.

Final Remarks

It's funny how big a role creativity plays in the development of attack vectors. In this case it all started with a simple question: "what would AWS CloudTrail register if we tried to export an entire DynamoDB table straight to a file using stolen creds?"

As important as it is to maintain commitment to the development of detection techniques against a known standard - 3CS MDR covers 80% of the AWS MITRE matrix - detection engineering is never complete and requires constant tuning, development and forward thinking. 

After getting confirmation from AWS that it would not be possible to detect these activities with the current version of AWS DynamoDB (at least in regions outside the ones mentioned above), we decided to increase the risk score of the MDR rule that detects the usage of list-tables (as we already cover exfiltration attempts based on AWS Glue).

Join us

If you like this kind of topic, consider registering on our Community Slack, where we talk threat hunting, NIDS and traffic analysis, general AWS security & more!

Want to take it a step further and already have some AWS threat hunting experience under your belt? If our vision resonates with you in any way, reach out to us at jobs@ so we can talk.
