Assessing Security Tools Against the ATT&CK Framework: A Comprehensive Approach

Chapter 1: Introduction to Security Tool Assessment

This article forms part of my ongoing capstone blogging series, documenting my project as it unfolds. For a complete list of articles, you can refer to my Cybersecurity Master's Capstone compilation. To maintain confidentiality, I will refer to the companies I am analyzing as Company A, B, C, etc.

Assuming you are familiar with my Project Proposal, I won’t revisit the goals and objectives here. If you haven’t read it yet, I encourage you to do so for essential background information.

In Parts One and Two, I outlined the initial steps of my project, which involved identifying the threat actors and malware types targeting Company A and its sector. I then gathered the tactics, techniques, and procedures (TTPs) used in historical cyber incidents. This data was subsequently compiled to create an ATT&CK framework heatmap that will be integral to the capabilities mapping discussed in this article.

As a brief recap, the previous article concluded with the completion of an ATT&CK matrix heatmap depicting the techniques employed by the threat actors against Company A and the associated malware variants.

ATT&CK matrix representation of priority TTPs

The heatmap employs a scoring system from 0 to 105, where the score reflects the number of cyber incidents in which each technique has been utilized. Techniques appearing in only one or two incidents are indicated in light yellow, whereas those highlighted in dark orange and red have been employed in a greater number of events and should therefore be prioritized for attention.

While this information is valuable, as discussed in Part Two, it does not yet offer sufficient insight for a security operations team. The next step in this assessment is to align security tool capabilities with these techniques, thereby allowing us to evaluate how well the company's defenses are positioned against these threats.

Section 1.1: Overview of the ATT&CK Navigator

Before delving into the research and evaluation of tools, I would like to provide a brief overview of the ATT&CK Navigator. This tool, while useful for exploring the ATT&CK framework, primarily assists in tracking and commenting on techniques and sub-techniques during assessments. It is customizable and allows users to export heatmaps in CSV or JSON formats for future reference.
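To make the export format concrete, here is a minimal Python sketch of how a scored layer like my priority heatmap could be generated programmatically. The technique IDs and incident counts below are placeholders rather than my actual data, and the exact layer fields may vary slightly between Navigator versions.

```python
import json

# Placeholder incident counts per ATT&CK technique ID -- not my real data
incident_counts = {
    "T1566": 105,  # Phishing
    "T1059": 42,   # Command and Scripting Interpreter
    "T1113": 2,    # Screen Capture
}

# A minimal ATT&CK Navigator layer: scores drive the color gradient,
# mirroring the 0-105 scheme described earlier
layer = {
    "name": "Priority TTPs",
    "domain": "enterprise-attack",
    "description": "Techniques scored by the number of incidents observed",
    "gradient": {
        "colors": ["#ffffcc", "#ff8000", "#cc0000"],  # light yellow -> orange -> red
        "minValue": 0,
        "maxValue": 105,
    },
    "techniques": [
        {"techniqueID": tid, "score": n, "comment": f"seen in {n} incidents"}
        for tid, n in incident_counts.items()
    ],
}

# Write the layer so it can be re-imported into the Navigator later
with open("priority_ttps.json", "w") as f:
    json.dump(layer, f, indent=2)
```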

As I grew more accustomed to the tool, I created layers reflecting the state of TTP coverage at each stage of the assessment. Ultimately, I established five distinct layers:

  1. Initial heatmap with priority TTPs highlighted
  2. Phase one heatmap showing EDR tool coverage
  3. Phase two heatmap displaying EDR and Internet filtering coverage
  4. Phase three heatmap illustrating EDR, Internet filtering, and perimeter security coverage
  5. Phase four heatmap indicating the final coverage of EDR, Internet filtering, perimeter security, and SIEM tool capabilities

Each layer represents a step in the assessment as I map the capabilities of each tool, with the final layer revealing which priority techniques are adequately covered.
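Conceptually, each phase simply layers one tool's coverage over the previous one, keeping the strongest detection seen so far. The sketch below shows one way that aggregation could work; the technique IDs and confidence labels are illustrative, not my actual findings.

```python
# Confidence labels ordered so a later tool can only raise, never lower,
# the confidence already established by an earlier tool
LEVELS = {"none": 0, "some": 1, "high": 2}

def layer_coverage(*tool_coverages):
    """Merge per-tool coverage maps, keeping the best confidence per technique."""
    merged = {}
    for coverage in tool_coverages:
        for technique, level in coverage.items():
            if LEVELS[level] > LEVELS[merged.get(technique, "none")]:
                merged[technique] = level
    return merged

# Illustrative per-tool results (technique ID -> confidence)
edr = {"T1005": "high", "T1056.001": "high", "T1113": "some"}
proxy = {"T1071": "high", "T1113": "high"}

# Phase 2 view: Internet filtering layered over EDR
print(layer_coverage(edr, proxy))
# {'T1005': 'high', 'T1056.001': 'high', 'T1113': 'high', 'T1071': 'high'}
```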

In addition to determining how I would aggregate the assessments, I needed to devise a scoring and color scheme to accurately represent the detection level for each technique and sub-technique.

After some consideration, I settled on the following color and scoring system:

  • Gray: out of scope (non-priority techniques)
  • Yellow: no confidence of detection (none of the assessed security tools detected the technique)
  • Orange: some confidence of detection
  • Green: high confidence of detection

There is more nuance to this legend than shown here, which I will elaborate on later. For now, let's proceed with the evaluation of security tools.

Section 1.2: Evaluating Security Tools

The MITRE ATT&CK framework is an invaluable resource for understanding the tactics and techniques used by threat actors, and it significantly enhances an organization's security operations program.

As discussed in this series, every organization faces a unique set of threat actors, malware types, and corresponding MITRE TTPs that are likely to be employed in attacks against them. While any threat actor may use a variety of malware and techniques, the ones identified are more probable within Company A's environment.

By evaluating how Company A's security tools align with these identified TTPs, we can derive substantial insights from this exercise. Understanding which techniques the existing security tools can identify or mitigate provides a clearer picture beyond merely knowing the techniques used by priority threat actors and malware variants.

Researching the capabilities of security tools initially appeared overwhelming. I lacked detailed knowledge of what each tool could or could not do at the ATT&CK technique level. While assumptions could be made, such an approach would not be prudent when evaluating the capabilities of the security toolset. The objective is to identify potential gaps, making it essential to be cautious rather than presuming a tool's detection or prevention capabilities.

To begin, I researched the ATT&CK framework coverage of popular security tools. After sifting through various results and refining my search terms, I discovered a valuable resource: ATT&CK Evaluations.

ATT&CK Evaluations are publicly available assessments of security tools designed to help professionals better comprehend and defend against known behaviors (TTPs). These evaluations examine a vendor's ability to detect or thwart common behaviors associated with APT3, APT29, Carbanak+FIN7, and Wizard Spider + Sandworm.

Although I won’t delve into specifics, I used these evaluations as a resource. Fortunately, three of the four tools I chose to assess had undergone evaluations, allowing me to identify their detection and prevention capabilities.

Upon further investigation of the final tool, Zscaler, I found a comprehensive whitepaper detailing which of their product modules identified or failed to detect each ATT&CK technique and sub-technique. Although this evaluation was self-conducted and not performed by MITRE, it remains reliable due to its thorough descriptions and examples.

Performing the Assessment

With resources compiled and a heatmap scheme established, I proceeded to the assessment phase. The ATT&CK Evaluations for two of the tools I reviewed spanned three rounds: APT3, APT29, and Carbanak+FIN7. The third evaluated tool, a SIEM product, was assessed against APT29 only.

To map the capabilities identified during the ATT&CK evaluations, I meticulously reviewed each evaluation round and highlighted the in-scope techniques and sub-techniques detected by the tool.

For instance, examining the techniques revealed that Data from Local System, Input Capture, and Screen Capture were highlighted in the Priority TTP heatmap. In the first layer of my heatmap, I adjusted Data from Local System from orange to green, and Screen Capture from yellow to green. Input Capture was slightly different; it includes four sub-techniques, of which only one was detected, leading me to change the sub-technique Keylogging to green while keeping Input Capture at orange.
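For bookkeeping, each of those adjustments amounts to recoloring an entry in the exported layer. A rough sketch of that step follows; the ATT&CK IDs are the standard identifiers for these techniques, but the hex shades are placeholders.

```python
# A few entries from an exported layer (abridged); colors as left by the
# priority-TTP pass
techniques = [
    {"techniqueID": "T1005", "color": "#ff8000"},      # Data from Local System: orange
    {"techniqueID": "T1113", "color": "#ffffcc"},      # Screen Capture: yellow
    {"techniqueID": "T1056", "color": "#ff8000"},      # Input Capture: orange
    {"techniqueID": "T1056.001", "color": "#ffffcc"},  # Keylogging: yellow
]

# EDR round results: detected entries go green; T1056 itself stays orange
# because only one of its four sub-techniques was detected
GREEN = "#54ca54"
detected = {"T1005", "T1113", "T1056.001"}

for entry in techniques:
    if entry["techniqueID"] in detected:
        entry["color"] = GREEN
```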

Heatmap progression in assessing security tools

I continued this process for each tactic, adjusting TTPs based on the coverage each tool could potentially provide. As I progressed, I recognized the need for an additional scoring metric to maintain consistency in assessing the level of sub-technique coverage that warranted high confidence in technique detection.

For example, Unsecured Credentials comprises seven sub-techniques. After evaluating all the tools, four of the seven sub-techniques were detected, while the others were not. This raised the question: should Unsecured Credentials remain in the Some Confidence category or be upgraded to High Confidence?

To address this, I established a scoring system:

  • Low Confidence: less than 20% sub-technique coverage
  • Some Confidence: 20% to less than 70% sub-technique coverage
  • High Confidence: 70% or higher sub-technique coverage

If a technique achieved 20–70% coverage of sub-techniques, I classified it as Some Confidence, while those with 70% or higher were upgraded to High Confidence. Referring back to Unsecured Credentials, I left it in the Some Confidence category due to its 57% sub-technique coverage.
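Encoded as a function, the rule looks something like the sketch below; the thresholds follow the bands above, and the 4-of-7 example mirrors Unsecured Credentials.

```python
def technique_confidence(detected: int, total: int) -> str:
    """Bucket a parent technique by the share of sub-techniques detected."""
    coverage = detected / total
    if coverage >= 0.70:
        return "High Confidence"
    if coverage >= 0.20:
        return "Some Confidence"
    return "Low Confidence"

# Unsecured Credentials: 4 of 7 sub-techniques detected (~57%)
print(technique_confidence(4, 7))  # Some Confidence
```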

While this scoring methodology may not apply universally, it sufficed for aggregating capabilities across four different security tools using signature-, behavior-, and analytics-based detection.

In general, MITRE suggests upgrading a detection's confidence level only when multiple detection types contribute to Some or High Confidence, and that guidance applied here.
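One way to encode that guidance: treat a sub-technique as confidently detected only when more than one detection type fired across the toolset. The detection records in this sketch are purely hypothetical.

```python
# Hypothetical records of which detection types fired per sub-technique
detections = {
    "T1552.001": {"signature", "behavior"},  # two types -> upgrade eligible
    "T1552.004": {"analytics"},              # single type -> no upgrade
}

def upgrade_eligible(sub_technique: str) -> bool:
    """Upgrade confidence only when multiple detection types contribute."""
    return len(detections.get(sub_technique, set())) >= 2

for sub in sorted(detections):
    print(sub, upgrade_eligible(sub))
```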

The Results

For those interested in observing the aggregation process, I shared a reel on my Instagram account a few weeks ago. Unfortunately, it's challenging to convey that visually stimulating content in writing, so instead, I’ll outline the progression of the capabilities mapping from one tool to the next.

Phase 1: Endpoint Detection & Response (EDR) — EDR tool capabilities mapped

Compared with the initial heatmap, in which the priority TTPs were marked in yellow, orange, and red, you'll notice significant changes: many techniques have transitioned to orange and green, indicating that they were detected with Some or High Confidence by the assessed EDR tool.

Phase 2: Internet Filtering (Proxy) — Internet filtering tool capabilities layered over EDR tool capabilities

This matrix reveals a greater prevalence of green, signifying that the assessed Internet filtering tool covers more techniques and sub-techniques than the EDR tool. This illustrates the importance of a defense-in-depth strategy.

Phase 3: Next-Gen Firewall (Perimeter Security) — Perimeter security capabilities layered over EDR and Internet filtering tools

While assessing the capabilities of the NGFW tool, I discovered that many techniques were already covered by the previous two tools. However, one technique and several sub-techniques were uniquely detected by this tool, enhancing overall coverage.

Phase 4: SIEM (Security Information & Event Management) — Final phase with SIEM tool capabilities mapped to the matrix

Again, an increase in green indicates that the SIEM covered additional sub-techniques, pushing several techniques from the 20–70% coverage band to over 70% and warranting their upgrade from Some Confidence to High Confidence.

With the Security Operations assessment phase of my project concluded, I now have a clear understanding of the priority TTP coverage provided by my security toolset, along with the gaps that remain.

As observed in the final matrix, some techniques still appear in yellow, indicating a lack of detection confidence. Addressing these gaps is crucial, along with identifying strategies to enhance detection capabilities for techniques rated as orange or Some Confidence.

Stay tuned for Part Four, where I will delve deeper into these techniques, review MITRE's detection and mitigation recommendations, and propose strategies to close these gaps.
