We are excited to launch a new feature as part of the Securonix Next-Gen SIEM – the Securonix Analytics Sandbox.
Why Do You Need an Analytics Sandbox?
Whether you are a member of a blue team, a content developer, or a detection engineer, you share a common problem – testing policies without affecting security operations.
Fine-tuning new policies, team-created content, or algorithms in production runs the risk of creating ‘noise’ in the form of excessive, unverified alerts, false positives, and violations. This consumes the time of your already busy threat verification and response team.
The best way to prevent this is to test, tune, and validate use cases against real enterprise data prior to pushing them to a production environment. However, this is impossible to do with legacy SIEMs without consuming response resources and increasing data storage and compute requirements.
This is where the Securonix cloud platform architecture is a huge advantage. It can provide these functions without requiring additional infrastructure, all while improving quality assurance for your content packages. It allows you to test at scale, because small data sets often miss the inflection points. Finally, it allows for the creation of multiple test beds for different teams and team members. Data science, detection engineers, blue teams, and others, all with their own distinct use cases, are all able to analyze the impact of their ideas, threat models, and content without affecting the SOC team’s performance.
How Does the Analytics Sandbox Work?
The tester creates a policy in the sandbox and tests it against the current enterprise data set. A use case in the sandbox does not change the entity score seen by the SOC until it is promoted from the sandbox into the production environment. This prevents testing-related false positives from being generated and consuming SOC resources.
Any use case, when moved from the sandbox to production, provides three options (sketched in the example below):
- (a) Delete violations (risk score) and delete metadata (behavior profile)
- (b) Delete violations (risk score) and keep metadata (behavior profile)
- (c) Keep violations (risk score) and keep metadata (behavior profile)
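To make the three choices concrete, here is a minimal sketch in Python. The option labels, names, and `promote` function are hypothetical illustrations of the behavior described above, not the Securonix API.

```python
from enum import Enum

# Hypothetical labels for the three promotion choices described above.
class PromotionOption(Enum):
    DELETE_VIOLATIONS_DELETE_METADATA = "a"  # drop risk scores and behavior profile
    DELETE_VIOLATIONS_KEEP_METADATA = "b"    # drop risk scores, keep behavior profile
    KEEP_VIOLATIONS_KEEP_METADATA = "c"      # keep risk scores and behavior profile

def promote(option: PromotionOption) -> dict:
    """Return which sandbox artifacts survive promotion for a given option."""
    keep_violations = option is PromotionOption.KEEP_VIOLATIONS_KEEP_METADATA
    keep_metadata = option is not PromotionOption.DELETE_VIOLATIONS_DELETE_METADATA
    return {"violations_kept": keep_violations, "metadata_kept": keep_metadata}

for opt in PromotionOption:
    print(opt.name, promote(opt))
```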
Policies running in the analytics sandbox will not affect operational ‘entity scores’ seen by the SOC. For example:
User A has an entity score of 5. If the policy “Email to self” is running in the sandbox and user A has violated it with a score of 2, the final entity score the SOC sees remains 5, because “Email to self” is running in the analytics sandbox, not in production. When the test policy is moved to production, the entity score will either be recalculated (option c) or remain the same (options a and b), depending on which option is chosen.
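The short sketch below walks through the same arithmetic. The data structures and the `soc_visible_score` helper are hypothetical, used only to illustrate how sandbox violations are excluded from the SOC-visible score until a policy is promoted with the “keep violations” option.

```python
# User A's current entity score as seen by the SOC.
PRODUCTION_SCORE = 5

# Violations raised by sandbox policies carry a score but are excluded from
# the SOC-visible entity score until the policy is promoted.
sandbox_violations = [{"policy": "Email to self", "score": 2}]

def soc_visible_score(base_score, violations, promoted_with_keep_violations):
    """Entity score the SOC sees: sandbox violations only count once the
    policy is promoted with the 'keep violations' option (option c)."""
    if promoted_with_keep_violations:
        return base_score + sum(v["score"] for v in violations)
    return base_score

# While "Email to self" is in the sandbox, or promoted with option a or b:
print(soc_visible_score(PRODUCTION_SCORE, sandbox_violations, False))  # -> 5

# Promoted with option c (keep violations), the score is recalculated:
print(soc_visible_score(PRODUCTION_SCORE, sandbox_violations, True))   # -> 7
```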
Developing the Analytics Sandbox
We are especially grateful to three client organizations who took an active role in the development and testing of the Securonix Analytics Sandbox.
The first client, a multinational finance organization, faced the challenge of having a smaller SOC team than their content development team. Per their audit and compliance requirements, every use case in production needed thorough testing. The content development team was required to test use cases against production data sets in order to ensure alerts were tuned correctly and the overall alert count was in a manageable range. However, the content development team did not have a dedicated instance to test use cases at scale.
With Securonix, all the use cases they have in development now live in the Securonix Analytics Sandbox. The content development team can tune their use cases and risk scoring before certifying and promoting the use cases to production. The SOC does not have access to the development use cases, and the use cases do not affect day-to-day operations.
The second client, an energy company, has a hands-on insider threat team dedicated to developing new use cases and tuning existing content. For each change, the team creates a new version of the policy and runs two to three versions of the use case in parallel. This led to risk scores being assigned to a user or entity multiple times and required a lot of manual cleanup. The client wanted an environment where simulations could be run, and scores and violation counts could be previewed, without affecting regular operations.
The Securonix Analytics Sandbox enables the insider threat team to run variations of existing use cases in the sandbox. Results can be evaluated for both alert fidelity and rescoring before existing use cases are decommissioned and replaced with their newer variants.
The third client, a multinational pharmaceutical organization, has three different teams interacting with the Securonix solution on a daily basis: security operations, security engineering, and threat hunting. The security operations team focuses on investigating all alerts detected by the platform within a very tight SLA. The security engineering team focuses on performing attack simulations (red team), building new content, and updating existing content based on feedback provided. The threat hunting team focuses on continuous threat hunting (blue team) and developing new use cases to automate detections.
The client required the ability to deploy new content on an almost daily basis with the option to test against production logs without affecting the work of the security operations team. They wanted to perform new attack simulations every week and improve detection and content coverage with minimal downtime.
The Securonix Analytics Sandbox enables the security engineering and threat hunting teams to collaborate effectively in creating and deploying new content in the production environment. New use cases are created in the analytics sandbox and attack simulations are performed against production logs in order to evaluate use cases for both true positives and false positives before the security operations team reviews them.
Whether you are on a security operations, security engineering, or threat hunting team, the Securonix Analytics Sandbox has much to offer. Should you have questions, please get in touch.