From Sigma's project page:
Sigma is a generic and open signature format that allows you to describe relevant log events in a straight forward manner. The rule format is very flexible, easy to write and applicable to any type of log file. The main purpose of this project is to provide a structured form in which researchers or analysts can describe their once developed detection methods and make them shareable with others.
In this post we'll discuss three different topics: a new piece of software we open-sourced, the inclusion of a new config/mapping, and a new feature called overrides that is now available to Sigma users.
S2AN
Starting with our latest software release: Sigma2AttackNet (S2AN) is a standalone, pre-compiled .NET Core binary for Linux and Windows that runs through a folder of Sigma rules and creates an ATT&CK Navigator layer.
We developed S2AN so that we could incorporate the creation of an ATT&CK coverage map from within a repository with a minimal build environment. It is based on the Sigma2Attack script present in the Sigma project.
You can visit the S2AN project page to get download links as well as the source code.
AWS CloudTrail to Elastic Common Schema and AWS Exported Fields
Starting today you can generate ECS-mapped queries for AWS CloudTrail. You can instruct sigmac to format output per ECS standards using the ecs-cloudtrail.yml mapping file that is now included in Sigma. This contribution goes hand in hand with the work we previously open-sourced through the release of the ECS-CloudTrail pipeline.
The mapping in this file extends to Elastic AWS Exported Fields (which apply to all Beats software). While not all fields are applicable (some are only used temporarily during processing), we keep an inventory of what you can expect to find when using ECS and Exported Fields.
We hope that by continuing to develop open-source software that connects these dots, we can increase community interest in AWS threat hunting and in discussions around detection engineering.
We have included an example query construction that utilizes CloudTrail and ECS in the next section of this post.
Sigma Overrides
Overrides are a new feature for Sigma that allows for additional detection parameters as well as custom field replacements. We developed overrides with two objectives:
- Provide the capability of developing detection queries for processed fields
- Provide the capability of having custom fields per log source or schema
Let's dive a bit deeper into what we consider processed fields.
Continuing with ECS as our schema of choice, and the event.outcome field in particular: ECS defines a series of conditions that, depending on the log source, determine what the value of this field should be.
During pre-processing (think, for example, of Filebeat or an ingest pipeline in Elasticsearch - our pipeline being a good example), a series of conditions is evaluated to determine whether the value should be success or failure. While these conditions are built from fields present in the log source, those fields will often be removed from the event after a decision has been made.
Let's assume that event.outcome for log source X depends on the sum of two fields being greater than or equal to two (2).
Our data:
fieldA=1
fieldB=1
In pre-processing the following will happen: fieldA + fieldB is computed. If the result of that addition is >= 2, event.outcome will hold the value success. If not, event.outcome will be set to failure.
While, in theory, we could develop a Sigma rule that made the same check and reached the same conclusion, ECS pipelines, for example, remove fieldA and fieldB after the pre-processing pipeline reaches a decision, effectively leaving us with just the outcome of the process: event.outcome:<value>.
Sigma overrides give the config/mapping maintainer the capability of defining those same conditions in the source mapping file, while giving the user (rule developer) the possibility of developing detections that take into account the fields as they exist in the desired output schema.
How are overrides configured?
The condition (logic) under which a replacement will take place can be defined either through regexes or through literals. Overrides are applied after mappings are put in place.
Below is a configuration that utilizes literals:
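As a minimal sketch, using the fieldA/fieldB example from above, a literal-based override could look something like the following. The key names (field, literal, conditions) are our own illustrative assumptions and may not match the exact override schema accepted by sigmac:

```yaml
# Illustrative sketch only - the key names below are assumptions and may not
# match the exact override schema accepted by sigmac.
overrides:
  - field: event.outcome      # field to emit in place of the raw fields
    literal: success          # value to use when the conditions below match
    conditions:               # literal matches against the raw (pre-processing) fields
      fieldA: 1
      fieldB: 1
```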
The full configuration of the override that goes into the Sigma mappings file, based on the examples and schema discussed throughout this post, would be:
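Under the same assumed syntax as the sketch above (again, the key names are ours, not necessarily sigmac's), an override that collapses the raw CloudTrail error fields into event.outcome: failure could look like this:

```yaml
# Illustrative sketch only - key names are assumptions; refer to the
# ecs-cloudtrail.yml mapping file shipped with Sigma for the real syntax.
overrides:
  - field: event.outcome
    literal: failure
    conditions:               # raw CloudTrail fields a rule may reference
      errorCode: '*'
      errorMessage: '*'
      responseElements: '*Failure*'
```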
This override configuration, in particular, performs the same logic that Elastic puts in place when classifying success or failure for the AWS CloudTrail source.
Which targets can make use of overrides?
We have included support for the following targets:
- es-qs
- es-rule
- kibana
- xpack-watcher
- elastalert
Will this change the way rules are developed?
The short answer is: no. Rules are written in the typical Sigma format. Here's an example:
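The sketch below shows what such a rule can look like; the exact values, metadata, and condition are illustrative and may differ from the rule shipped with Sigma:

```yaml
# Illustrative reconstruction of the rule described below; values are examples.
title: AWS EC2 VM Export Failure
status: experimental
description: Detects a failed attempt to export an EC2 instance.
logsource:
  product: aws
  service: cloudtrail
detection:
  selection:
    eventSource: ec2.amazonaws.com
    eventName: CreateInstanceExportTask
  filter:                       # raw CloudTrail fields indicating failure
    errorCode: '*'
    errorMessage: '*'
    responseElements|contains: 'Failure'
  condition: selection and filter
level: medium
```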
This particular rule looks for a failed attempt at exporting an EC2 instance. You'll see that we've included a filter that goes over three fields in order to determine whether the operation was a failure.
When the logic put in place in the rule matches the overrides configuration, the fields will be replaced accordingly.
Here's the output of that same rule using our recently provided AWS CloudTrail mappings file with the overrides configuration that was shown above:
The following diagram explains the logic behind overrides:
Conversation Starter
We're excited to be able to share these developments with the community and we'd like to get your feedback. If this is useful to you, or if it fails, we'd like to know. Reach out to us on Twitter or join our Community Slack if you'd like to discuss this or some of our other open source projects.