hack3rs.ca network-security
/learning/threat-informed-defense-attack-technique-mapping :: module-11

student@hack3rs:~/learning$ open threat-informed-defense-attack-technique-mapping

Threat-Informed Defense Using ATT&CK-Style Technique Mapping

Use ATT&CK-style mapping to organize detections and identify blind spots by technique, telemetry source, and control layer. The goal is coverage clarity, not checkbox compliance.

Technique mapping helps teams move beyond isolated alerts and toward a coherent detection strategy. It also improves communication with leadership by showing where coverage exists and where it does not.

learning-objectives

  • $Map detections to adversary behaviors/techniques without overclaiming coverage.
  • $Differentiate prevention, detection, and investigative coverage.
  • $Identify telemetry dependencies for each detection claim.
  • $Prioritize coverage improvements using real environment risk and incidents.

example-dataflow-and-observation-paths

Use these example dataflows to trace how activity moves through systems and where a defender can observe evidence. This is how learners move from memorizing terms to thinking like investigators.

  • $Observed attacker behavior -> mapped to technique category -> linked to existing detections and telemetry -> gaps identified -> new data collection or rule work prioritized.
  • $Detection inventory -> technique mapping -> confidence score -> test method -> validation result -> coverage map updates.
  • $Incident findings feed mapping -> repeated gaps become roadmap items for telemetry, parsing, or rule improvements.
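The second dataflow above (detection inventory -> mapping -> confidence -> test -> validation -> coverage map) can be sketched as a single auditable record. Field names and values below are illustrative, not a standard schema.

```shell
# One detection-inventory record flowing through the mapping pipeline.
# Columns mirror the dataflow stages; values are lab examples only.
printf 'detection,technique,confidence,test_method,validation_result\n' > pipeline.csv
printf 'brute-force-alert,credential-access,medium,atomic-replay,passed\n' >> pipeline.csv

# Render the record so each pipeline stage is visible as a column.
column -s, -t pipeline.csv
```

Every stage of the dataflow becomes a field you can audit later; a record with an empty test_method or validation_result is a visible gap, not a silent one.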

baseline-normal-before-debugging

  • $Detection coverage claims include telemetry source and validation method.
  • $Technique mappings are updated after incidents and tests, not only during audits.
  • $Coverage maps distinguish prevention from alerting and investigative visibility.
Expert tip: Baseline normal behavior before writing detections or escalating anomalies. Most tuning and triage errors come from skipping this step.

concept-breakdown-and-mastery

1. What Technique Mapping Is (and Is Not)

$ core idea: Technique mapping organizes security controls and detections against common attacker behaviors. It is useful for planning and gap analysis, but it does not guarantee detection of every implementation of a behavior.

$ defender angle: Avoid “green matrix syndrome,” where teams mark a technique as covered because one generic alert exists. Real coverage depends on data quality, tuning, scope, and analyst ability to interpret the signal.

$ prove understanding: Map detections to adversary behaviors/techniques without overclaiming coverage.

2. Building a Practical Coverage Map

$ core idea: For each detection, record: behavior/technique intent, data sources required, scope (which assets/segments), confidence, false-positive profile, and test method. This makes coverage claims auditable.

$ defender angle: Separate preventive controls from detections and detections from investigative procedures. A blocked action is not the same thing as a detection you can triage and hunt with later.

$ prove understanding: Differentiate prevention, detection, and investigative coverage.

3. Using Mapping to Drive Improvements

$ core idea: Coverage gaps are often telemetry gaps rather than rule gaps. The right fix may be enabling logs, fixing parsing, or improving sensor placement before writing a new detection.

$ defender angle: Use purple-team style validation or controlled tests to confirm detections work as expected. Every mapped detection should have an associated validation approach where possible.

$ prove understanding: Identify telemetry dependencies for each detection claim.

deep-dive-notes-expanded

Work through the sections in order. For each section, learn the theory, identify normal behavior, identify failure patterns, then validate with packet/log/CLI evidence.

1. What Technique Mapping Is (and Is Not)

Technique mapping organizes security controls and detections against common attacker behaviors. It is useful for planning and gap analysis, but it does not guarantee detection of every implementation of a behavior.

Avoid “green matrix syndrome,” where teams mark a technique as covered because one generic alert exists. Real coverage depends on data quality, tuning, scope, and analyst ability to interpret the signal.

Use ATT&CK-style mapping as a living model tied to evidence and tests, not a static slide for reporting.

Normal Behavior

Detection coverage claims include telemetry source and validation method.

Failure / Abuse Pattern

Coverage is marked as complete without testing or scope notes.

Evidence To Collect

Map detections to adversary behaviors/techniques without overclaiming coverage.

2. Building a Practical Coverage Map

For each detection, record: behavior/technique intent, data sources required, scope (which assets/segments), confidence, false-positive profile, and test method. This makes coverage claims auditable.
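A minimal way to make such a record auditable is one CSV row per detection carrying all six fields from the paragraph above. The detection name, field values, and data-source labels below are hypothetical examples, not a prescribed taxonomy.

```shell
# Record one detection's coverage claim with the six fields from the text:
# technique intent, data sources, scope, confidence, FP profile, test method.
# Multiple data sources are joined with '+' to keep the CSV single-delimiter.
cat > detection-record.csv <<'EOF'
detection,technique_intent,data_sources,scope,confidence,fp_profile,test_method
susp_service_install,persistence-via-service,winlog:system+edr:process,domain-workstations,medium,low-noise,atomic-test
EOF
column -s, -t detection-record.csv
```

With every claim in this shape, "is this technique covered?" becomes a query over evidence instead of a judgment call in a meeting.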

Separate preventive controls from detections and detections from investigative procedures. A blocked action is not the same thing as a detection you can triage and hunt with later.

Start with techniques relevant to your environment and incidents. A perfect global matrix is less useful than strong coverage on techniques that actually affect your technology stack.

Normal Behavior

Technique mappings are updated after incidents and tests, not only during audits.

Failure / Abuse Pattern

Telemetry dependencies are undocumented, so detections silently fail when data changes.

Evidence To Collect

Differentiate prevention, detection, and investigative coverage.

3. Using Mapping to Drive Improvements

Coverage gaps are often telemetry gaps rather than rule gaps. The right fix may be enabling logs, fixing parsing, or improving sensor placement before writing a new detection.
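One way to separate telemetry gaps from rule gaps is to diff the data sources a detection requires against the sources actually being ingested. The two inventory files below are hypothetical stand-ins; in practice they would be exported from your detection records and log pipeline.

```shell
# Hypothetical inventories: sources a detection needs vs. sources ingested.
# comm(1) requires sorted input, so both lists are sorted first.
printf 'winlog:security\nedr:process\ndns:query\n' | sort > required.txt
printf 'winlog:security\nedr:process\n' | sort > ingested.txt

# Lines only in required.txt are telemetry gaps, not rule gaps.
comm -23 required.txt ingested.txt > telemetry-gaps.txt
cat telemetry-gaps.txt
# -> dns:query
```

Here the right fix is enabling DNS query logging, not writing another rule that can never fire.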

Use purple-team style validation or controlled tests to confirm detections work as expected. Every mapped detection should have an associated validation approach where possible.

Review detections after incidents and near misses. If analysts repeatedly need manual pivots, consider improving enrichment or adding companion detections.

Normal Behavior

Coverage maps distinguish prevention from alerting and investigative visibility.

Failure / Abuse Pattern

Technique mapping becomes a reporting artifact instead of an engineering tool.

Evidence To Collect

Identify telemetry dependencies for each detection claim.

terminal-walkthroughs-with-example-output

These walkthroughs show representative commands plus example output so learners know what success and useful evidence look like. Treat the output as a pattern guide, not a fixed transcript.

Coverage Mapping Workspace

Beginner
Command
mkdir -p coverage-map && cd coverage-map
Example Output
# command executed in lab
# review output for expected fields, errors, and anomalies

$ why this matters: Use this step to validate the coverage mapping workspace before moving on to the more advanced commands in the same block. Focus on interpreting the output, not just running the command.

Detection Inventory Seed

Intermediate
Command
rg -n 'title:|name:' ../detections 2>/dev/null | head -20 || true
Example Output
# command executed in lab
# review output for expected fields, errors, and anomalies

$ why this matters: Use this step to validate the detection inventory seed before moving on to the more advanced commands in the same block. Focus on interpreting the output, not just running the command.

Gap Review Notes

Advanced
Command
printf 'Technique gaps:\nTelemetry gaps:\nValidation gaps:\n' > gap-review.txt
Example Output
# file created successfully

$ why this matters: Use this step to validate the gap review notes before moving on to the more advanced commands in the same block. Focus on interpreting the output, not just running the command.

cli-labs-and-workflow

Run these commands only in environments you own or are explicitly authorized to test. Use a lab VM, sandbox network, or approved internal test segment for practice.

Coverage Mapping Workspace

Beginner
mkdir -p coverage-map && cd coverage-map
printf 'technique,detection,data_source,scope,confidence,test_method\n' > coverage.csv
column -s, -t coverage.csv

Run in a lab or authorized environment. Record what fields change when you alter the test conditions.
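Once the header exists, the same workflow extends naturally: append one row per detection and re-render the table. The technique names, data sources, and confidence values below are lab examples.

```shell
# Recreate the workspace and header, then append example coverage rows.
mkdir -p coverage-map
printf 'technique,detection,data_source,scope,confidence,test_method\n' > coverage-map/coverage.csv
printf 'credential-dumping,lsass_access_alert,edr:process,all-endpoints,high,atomic-test\n' >> coverage-map/coverage.csv
printf 'dns-tunneling,long_txt_query_rule,dns:query,egress-resolvers,low,untested\n' >> coverage-map/coverage.csv

# Render as a table; rows marked 'untested' are claims, not coverage.
column -s, -t coverage-map/coverage.csv
```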

Detection Inventory Seed

Intermediate
rg -n 'title:|name:' ../detections 2>/dev/null | head -20 || true
jq '.[] | {name, data_sources}' detections.json 2>/dev/null | head

Run in a lab or authorized environment. Record what fields change when you alter the test conditions.
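If no detections.json exists yet, you can seed a small sample so the jq query has data to run against. The schema below (objects with name and data_sources) is a hypothetical lab format, not a vendor standard.

```shell
# Seed a two-entry sample inventory in an assumed name/data_sources schema.
cat > detections.json <<'EOF'
[
  {"name": "lsass_access_alert", "data_sources": ["edr:process"], "author": "lab"},
  {"name": "long_txt_query_rule", "data_sources": ["dns:query"], "author": "lab"}
]
EOF

# Project only the fields the coverage map needs, dropping the rest.
jq '.[] | {name, data_sources}' detections.json
```

The projection step matters: a detection inventory for mapping purposes only needs identity plus telemetry dependencies, so trimming early keeps the coverage map honest about what it tracks.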

Gap Review Notes

Advanced
printf 'Technique gaps:\nTelemetry gaps:\nValidation gaps:\n' > gap-review.txt
nano gap-review.txt

Run in a lab or authorized environment. Record what fields change when you alter the test conditions.

expert-mode-study-loop

  • $Explain the concept in plain language without reading notes.
  • $Show how to validate the concept with logs, packets, or commands.
  • $Name at least one common failure mode and how to detect it.
  • $Document what 'normal' looks like before testing edge cases.
Progress marker: You are ready to move on when you can explain the topic, run the commands, and interpret the output without guessing.

knowledge-check-and-answer-key

Try answering these from memory before looking at the hints. These questions are designed to test understanding of concepts, dataflow, and evidence collection.

1. What Technique Mapping Is (and Is Not)

Questions
  • ?How would you explain "What Technique Mapping Is (and Is Not)" to a new defender in plain language?
  • ?What does normal behavior look like for what technique mapping is (and is not) in your lab or environment?
  • ?Which logs, packets, or commands would you use to validate what technique mapping is (and is not)?
  • ?What failure mode or attacker abuse pattern matters most for what technique mapping is (and is not)?
Show answer key / hints
Answer Key / Hints
  • #Map detections to adversary behaviors/techniques without overclaiming coverage.
  • #Detection coverage claims include telemetry source and validation method.
  • #mkdir -p coverage-map && cd coverage-map
  • #Coverage is marked as complete without testing or scope notes.

2. Building a Practical Coverage Map

Questions
  • ?How would you explain "Building a Practical Coverage Map" to a new defender in plain language?
  • ?What does normal behavior look like for building a practical coverage map in your lab or environment?
  • ?Which logs, packets, or commands would you use to validate building a practical coverage map?
  • ?What failure mode or attacker abuse pattern matters most for building a practical coverage map?
Show answer key / hints
Answer Key / Hints
  • #Differentiate prevention, detection, and investigative coverage.
  • #Technique mappings are updated after incidents and tests, not only during audits.
  • #rg -n 'title:|name:' ../detections 2>/dev/null | head -20 || true
  • #Telemetry dependencies are undocumented, so detections silently fail when data changes.

3. Using Mapping to Drive Improvements

Questions
  • ?How would you explain "Using Mapping to Drive Improvements" to a new defender in plain language?
  • ?What does normal behavior look like for using mapping to drive improvements in your lab or environment?
  • ?Which logs, packets, or commands would you use to validate using mapping to drive improvements?
  • ?What failure mode or attacker abuse pattern matters most for using mapping to drive improvements?
Show answer key / hints
Answer Key / Hints
  • #Identify telemetry dependencies for each detection claim.
  • #Coverage maps distinguish prevention from alerting and investigative visibility.
  • #printf 'Technique gaps:\nTelemetry gaps:\nValidation gaps:\n' > gap-review.txt
  • #Technique mapping becomes a reporting artifact instead of an engineering tool.

lab-answer-key-expected-findings

Use this as a baseline answer key for labs and walkthroughs. Replace these with environment-specific observations as you practice in real or simulated networks.

Expected Normal Findings
  • +Detection coverage claims include telemetry source and validation method.
  • +Technique mappings are updated after incidents and tests, not only during audits.
  • +Coverage maps distinguish prevention from alerting and investigative visibility.
Expected Failure / Anomaly Clues
  • !Coverage is marked as complete without testing or scope notes.
  • !Telemetry dependencies are undocumented, so detections silently fail when data changes.
  • !Technique mapping becomes a reporting artifact instead of an engineering tool.

hands-on-labs

  • $Map five existing alerts to ATT&CK-style techniques and document their data source dependencies.
  • $Identify two techniques with weak coverage and propose telemetry or detection improvements.
  • $Create a coverage matrix that distinguishes prevention, alerting, and hunting-only visibility.
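The third lab above can be sketched as a matrix with an explicit control-layer column, so prevention, alerting, and hunting-only visibility never get flattened into one "covered" checkbox. The techniques and control names below are illustrative.

```shell
# Coverage matrix where each row names its control layer explicitly.
# 'hunting-only' means the data exists to investigate, but nothing alerts.
cat > coverage-matrix.csv <<'EOF'
technique,control,layer
credential-dumping,lsass-protection-policy,prevention
credential-dumping,lsass_access_alert,alerting
dns-tunneling,passive-dns-archive,hunting-only
EOF
column -s, -t coverage-matrix.csv
```

Note that credential-dumping appears twice: one technique can (and usually should) have rows at multiple layers.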

common-pitfalls

  • $Marking techniques as covered without testing or scope notes.
  • $Treating a framework as a mandate to monitor everything equally.
  • $Ignoring telemetry quality while focusing only on detection logic.

completion-outputs

# A technique coverage map with confidence notes
# A telemetry dependency inventory for critical detections
# A prioritized coverage improvement roadmap
learning-path-position

Response & Improvement / Weeks 9-10 · Module 11 of 12