
Contributions to Sigma: CloudTrail/ECS mappings, overrides and S2AN

We're excited to announce our contributions to the Sigma project!

From Sigma's project page:

Sigma is a generic and open signature format that allows you to describe relevant log events in a straight forward manner. The rule format is very flexible, easy to write and applicable to any type of log file. The main purpose of this project is to provide a structured form in which researchers or analysts can describe their once developed detection methods and make them shareable with others.

In this post we'll cover three topics: a new piece of software we open-sourced, the inclusion of a new config/mapping, and a new feature, called overrides, that is now available to Sigma users.


Let's start with our latest software release.

Sigma2AttackNet (S2AN) is a standalone, pre-compiled .NET Core binary for Linux and Windows that runs through a folder of Sigma rules and creates an ATT&CK Navigator layer.

We developed S2AN so that we could incorporate the creation of an ATT&CK coverage map from within a repository with a minimal build environment. It is based on the Sigma2Attack script present in the Sigma project.

You can visit the S2AN project page to get download links as well as the source code.

AWS CloudTrail to Elastic Common Schema and AWS Exported Fields

Starting today, you can generate ECS-mapped queries for AWS CloudTrail. You can instruct sigmac to format output per the ECS standard using the ecs-cloudtrail.yml mapping file that is now included in Sigma.
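For reference, mapping files of this kind follow the standard sigmac configuration format. The excerpt below is an illustrative sketch only, with a few hypothetical CloudTrail-to-ECS pairs; consult the ecs-cloudtrail.yml file shipped with Sigma for the authoritative field list:

```yaml
# Illustrative sketch of a sigmac mapping file in the style of
# ecs-cloudtrail.yml -- the exact contents ship with the Sigma project.
title: Elastic Common Schema (ECS) mappings for AWS CloudTrail
backends:
  - es-qs
  - es-rule
fieldmappings:
  eventName: event.action        # CloudTrail field -> ECS field
  eventSource: event.provider
  sourceIPAddress: source.ip
```

A mapping file like this is passed to sigmac with the -c flag alongside the chosen target backend.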

This contribution goes hand in hand with the work we previously open-sourced through the release of the ECS-CloudTrail pipeline.

The mapping in this file extends to Elastic AWS Exported Fields (which apply to all Beats software). While not all fields are applicable (some are only used temporarily during processing), we keep an inventory of what you can expect to find when using ECS and Exported Fields.

We hope that by continuing to develop open-source software that connects these dots, we can increase community interest in AWS threat hunting and in discussions around detection engineering.

We have included an example query construction that utilizes CloudTrail and ECS in the next section of this post.

Sigma Overrides

Overrides are a new Sigma feature that allows for additional detection parameters as well as custom field replacements. We developed overrides with two objectives:
  • Provide the capability of developing detection queries for processed fields
  • Provide the capability of having custom fields per log source or schema

Processed fields and supporting them in Sigma

Let's dive a bit deeper into what we consider processed fields.
Continuing with ECS as our schema of choice, and in particular the event.outcome field: ECS defines a series of conditions that, depending on the log source, determine what the value of this field should be.

During pre-processing (think, for example, of Filebeat or an ingestion pipeline in Elasticsearch - our pipeline being a good example), a series of conditions are evaluated to determine whether the value should be success or failure. While these conditions are built from fields present in the log source, those fields are often removed from the event after a decision has been made.

Let's assume that event.outcome for log source X depends on the sum of two fields being greater than or equal to two (2).

Our data:


In pre-processing, the following will happen:

fieldA + fieldB

If the result of that addition is >= 2, event.outcome will hold the value success. If not, event.outcome will be set to failure.
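As an illustration only (the field names fieldA and fieldB come from our hypothetical log source X, and Python here stands in as pseudocode for whatever the ingestion pipeline actually runs), the pre-processing step could be sketched as:

```python
def preprocess(event):
    """Sketch of a pre-processing step: derive event.outcome from two
    source fields, then drop them, the way an ingestion pipeline would."""
    outcome = "success" if event["fieldA"] + event["fieldB"] >= 2 else "failure"
    del event["fieldA"]   # source fields are removed once the decision is made,
    del event["fieldB"]   # leaving only the processed result on the event
    event["event.outcome"] = outcome
    return event

print(preprocess({"fieldA": 1, "fieldB": 1}))  # {'event.outcome': 'success'}
print(preprocess({"fieldA": 0, "fieldB": 1}))  # {'event.outcome': 'failure'}
```

Note that after this step the event no longer carries fieldA or fieldB, which is exactly the situation overrides are designed to address.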

While, in theory, we could develop a Sigma rule that made the same check and reached the same conclusion, ECS pipelines, for example, remove fieldA and fieldB after the pre-processing pipeline reaches a conclusion, effectively leaving us with just the outcome of the process: event.outcome:<value>.

Sigma overrides give the config/mapping maintainer the capability of defining those same conditions in the source mapping file, while giving the user (rule developer) the possibility of developing detections that take into account the fields as they exist in the desired output schema.

How are overrides configured?

Defining the condition (logic) under which a replacement will take place can be done either through regular expressions or through literals. Overrides are applied after mappings are put in place.

Below is a configuration that utilizes literals:
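The original post shows this configuration as an image. As a rough sketch of the idea (the key names below are hypothetical illustrations, not the exact schema - refer to the overrides documentation in the Sigma repository for the authoritative format), a literal-based override tied to our running fieldA/fieldB example could look like:

```yaml
# Hypothetical sketch only -- key names may differ from the shipped schema.
overrides:
  - field: event.outcome    # the processed field the replacement produces
    literals:               # conditions expressed as exact values
      fieldA: 1
      fieldB: 1
    value: success
```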

Using regular expressions simplifies the process. Here is the same configuration using the regex approach:
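Again as a hypothetical sketch (same caveat as above about key names), the regex variant would cover ranges of values instead of enumerating literals:

```yaml
# Hypothetical sketch only -- key names may differ from the shipped schema.
overrides:
  - field: event.outcome
    regexes:                  # conditions expressed as regular expressions
      fieldA: "[1-9][0-9]*"   # any value of one or more
      fieldB: "[1-9][0-9]*"
    value: success
```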

The full configuration of the override that goes into the Sigma mappings file, based on the examples and schema discussed throughout this post, would be:
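The full configuration is shown as an image in the original post. As a hedged approximation of the idea (Elastic derives event.outcome for CloudTrail from the presence of error information; the key names here are, once more, illustrative rather than the exact shipped schema):

```yaml
# Hypothetical sketch only -- see the ecs-cloudtrail.yml shipped with Sigma.
fieldmappings:
  eventName: event.action
  eventSource: event.provider
overrides:
  - field: event.outcome
    regexes:
      errorCode: ".+"        # any error code present
      errorMessage: ".+"     # any error message present
    value: failure
```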

This overrides configuration, in particular, replicates the logic Elastic applies when classifying AWS CloudTrail events as success or failure.

Which targets can make use of overrides?

We have included support for the following targets:
  • es-qs
  • es-rule
  • kibana
  • xpack-watcher
  • elastalert 
If you'd like to see support added for a target that is not listed, feel free to reach out to us.

Will this change the way rules are developed?

The short answer is: no. Rules are written in the typical Sigma format. Here's an example:
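The rule itself appears as an image in the original post. The sketch below approximates it, modeled on the AWS EC2 VM Export Failure rule from the public Sigma ruleset (details such as the condition and filter fields may differ from the rule actually shown):

```yaml
title: AWS EC2 VM Export Failure
status: experimental
description: Detects a failed attempt to export an EC2 instance.
logsource:
    service: cloudtrail
detection:
    selection:
        eventName: 'CreateInstanceExportTask'
        eventSource: 'ec2.amazonaws.com'
    filter:
        errorMessage|contains: '*'
        errorCode|contains: '*'
        responseElements|contains: 'Failure'
    condition: selection and not filter
level: low
```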

This particular rule looks for a failed attempt at exporting an EC2 instance. You'll see that we've included a filter that goes over three fields in order to determine whether the operation was a failure.

When the logic that is put in place on the rule matches the overrides configuration, the fields will be replaced accordingly.

Here's the output of that same rule using our recently provided AWS CloudTrail mappings file with the overrides configuration that was shown above:
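That output is also shown as an image in the original post. Under a mapping and override along the lines sketched earlier, the converted es-qs query might look roughly like this (illustrative only - the exact output depends on the shipped mapping file):

```
(event.action:"CreateInstanceExportTask" AND event.provider:"ec2.amazonaws.com" AND event.outcome:"failure")
```

The three-field error filter from the rule has been collapsed into a single event.outcome term, which is the point of the feature.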

The following diagram explains the logic behind overrides:

Conversation Starter

We're excited to be able to share these developments with the community and we'd like to get your feedback.

If this is useful to you, or if it fails you, we'd like to know. Reach out to us on Twitter or join our Community Slack if you'd like to discuss this or some of our other open source projects.
